DELE CA2 PART A : GENERATIVE ADVERSARIAL NETWORK WITH CIFAR-10¶

Team Members : Dario Prawara Teh Wei Rong (2201858) | Lim Zhen Yang (2214506)

RESEARCH REFERENCES¶

  1. https://towardsdatascience.com/on-the-evaluation-of-generative-adversarial-networks-b056ddcdfd3a#:~:text=One%20of%20the%20most,that%20can%20be%20reviewed.
  2. https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN
  3. https://neptune.ai/blog/generative-adversarial-networks-gan-applications
  4. https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/

BACKGROUND RESEARCH & ANALYSIS¶

In the vast field of Deep Learning, the ultimate objective is to develop models capable of effectively capturing and representing various forms of data distributions. Throughout history, the remarkable success of discriminative models, which learn to differentiate and map high-dimensional data into lower-dimensional representations, has been evident (Goodfellow et al., 2014). For instance, tasks like Image Classification exemplify discriminative modeling, where high-dimensional images are transformed into low-dimensional probability distributions over labels.

But what about generative modeling? In generative modeling, the primary aim is quite different. Instead of merely classifying or mapping data, the goal is to learn from a given data distribution and generate entirely new examples that adhere to the same distribution while maintaining uniqueness. Consequently, a high-performing generative model must produce examples that are not only recognizable and plausible in their representation but also virtually indistinguishable from real data instances (Brownlee, 2019). The realm of generative models encompasses both Unsupervised and Semi-Supervised approaches, with the choice depending on the specific task at hand.

When working with Generative Models, there are 3 different alternatives to network architectures :

  • Generative Adversarial Networks (GAN)
  • Diffusion Models
  • Variational Auto Encoders (VAE)

WHAT IS A GAN AND WHAT IS INSIDE IT?¶

A GAN, or Generative Adversarial Network, is essentially a deep learning model with two neural networks, the Generator and the Discriminator, which are trained together in a competitive manner. - Introduced by Ian Goodfellow and his colleagues in 2014.

GANs were then extended to a conditional model by conditioning both the generator and discriminator on some extra information in the 2014 Mehdi Mirza and Simon Osindero. "Conditional Generative Adversarial Nets" paper. By using a convolutional network to GANs, we got Convolutional GANs, or DCGANs. This paper also had many propositions to make the training of DCGANs stable. It created many interesting visual samples. 2015-11-19 | [Theory] Alec Radford et al. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks".

image.png

For GANs, the generator and discriminator have differing roles. While the generator's goal is to create realistic images that appear to be from the distribution of the training images, the discriminator's goal is to determine if a given image is from the data distribution. Here is how the process looks like:

  1. Generator creates realistic images.
  2. Discriminator learns to distinguish real vs fake from a set of real images and these newly generated images.
  3. Using the updated Discriminator, Generator learns to trick the Discriminator.

Let's say we have some a random distribution or a "prior", which we can sample from. We can denote this as $p_z(z)$, where $z$ represents a vector of a specific size. This vector acts as the input to our generative model $G$. The model is ultimately described as $G(z; \theta_g)$ where $\theta_g$ represents the parameters of the generative model.

Meanwhile, our discriminator model $D$, takes in an input $x$, an image, which can be fully described as $D(x; \theta_d)$, where likewise, $\theta_d$ represents the parameters of the discriminative model. Being the discriminator, $D(x)$ returns the probability that an input $x$ is from the.

  • This "game" between the two models can be thought of as trying to optimize of minimax function of:
$$\underset{G}{\text{min}}\underset{D}{\text{max}} V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)} [\text{log}D(x)] + \mathbb{E}_{z \sim p_{\text{z}}(z)} [\text{log}(1 - D(G(z)))]$$

(Goodfellow et al., 2014)

Essentially, the function $V$ takes in two inputs, our models $D$ and $G$, and returns an output that has two parts. The left hand part of the sum $\mathbb{E}_{x \thicksim p_{\text{data}}(x)} [\text{log}D(x)]$ represents "the expected value that the discriminator model predicts real data is real". The right hand part of the sum $\mathbb{E}_{z \thicksim p_{\text{z}}(z)} [\text{log}(1 - D(G(z)))]$ looks at given some random vector $z$, "what is the expected value that the discriminator model predicts fake data is fake", in that the value of the right hand part is maximum when discriminator model is successful at labeling the fake images of a generator as fake.

$\underset{G}{\text{min}}\underset{D}{\text{max}}$ aims to do two things. Firstly, what is the generator model $G$ that minimizes the value, which means the discriminator labels generator's images as real. Secondly what is the disciminator model $D$ that will maximize this value, which means discriminator model predicts real images as real and fake images as fake. These two perfectly optimize each other, when the best approach the discriminator can take is to just guess randomly as the generator images are on the same realism as the real data.

APPLICATIONS OF GENERATIVE ADVERSARIAL NETWORKS¶

As GANs are generative models, there are a large number of applications of GANs (Brownlee, 2019). Below are a few areas where GANs can be applied to :

  • Time Series
  • Image Generation
  • Audio / Music Generation
  • Style Transfer Application (E.g. Winter Photo to Summer Photo)

OUR PROJECT OBJECTIVE¶

Before we begin our analysis, let us take a look at our project's objective.

Apply some suitable GAN architectures to the problem of image generation using the CIFAR-10 dataset and generate 1000 snall colour images.

image-2.png

The CIFAR-10 dataset consists of 60000 32x32 colour images in a total of 10 classes, with 6000 images per class (Krizhevsky, 2009).

DEVELOPING THE GAN MODELS¶

Let's get down to building our Conditional GANs.

Why Conditional GANs? As mentioned during the above background research section, Conditional GANs are. On top of this, Conditional GANs allow the user to control the output.

Besides, if one appreciates the idea of not knowing what a Vanilla GAN may output, a simple random module can be attached as the conditional information to somewhat hack the Conditional GAN into being a normal GAN.

WHAT WILL BE DONE FOR GAN¶

Here, we will identify the tasks and objectives we want to meet, which will be used as a guide throughout the development of our GAN models.

  1. Explore the CIFAR-10 Dataset (EDA / Feature Engineering)
  2. Implement and Evaluate to find the Best Performing Model
  3. Make Model Improvements
  4. Analyse the Final Model and Make Conclusions

INITIALIZING MODULES AND LIBRARIES¶

  • Import necessary libraries for pre-processing, data exploration, feature engineering and model evaluation.

  • Some libraries used include tensorflow, matplotlib, and scikit-learn.

In [140]:
# Deep Learning Libraries for Model Building
import tensorflow as tf
from tensorflow.data import Dataset
from IPython.display import display

# Data Processing and Splitting Libraries
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import scipy
from keras.utils import to_categorical

# Utility and Miscellaneous Libraries
import logging
from datetime import datetime
import os
import math
from tqdm import tqdm
from scipy.linalg import sqrtm
import tensorflow_probability as tfp

# Import TensorFlow and Keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.data import Dataset
from tensorflow.keras import backend as K
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (
    Input, Dense, Conv2D, Conv2DTranspose, Embedding, Reshape, Flatten,
    Dropout, BatchNormalization, ReLU, LeakyReLU, MaxPooling2D, Concatenate,
    PReLU
)
from tensorflow.keras.losses import BinaryCrossentropy, SparseCategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Metric
from tensorflow.keras.callbacks import Callback
from tensorflow.keras.utils import plot_model
from keras.callbacks import Callback
from tensorflow_addons.layers import SpectralNormalization
from keras.initializers import RandomNormal

# Import Libraries for Scoring Metrics
from math import floor
from numpy import ones
from numpy import expand_dims
from numpy import log
from numpy import mean
from numpy import std
from numpy import exp
from numpy.random import shuffle
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input
from keras.datasets import cifar10
from skimage.transform import resize
from numpy import asarray
from numpy import cov
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
import glob
import imageio
from IPython.display import HTML

# Ignore warnings
import warnings
warnings.filterwarnings("ignore")

PERFORM CHECK FOR GPU¶

  • Ensure GPU can be found when using tf.config and list_physical_devices.
  • Avoid OOM errors by setting GPU memory consumption growth.
In [2]:
# Avoid OOM errors by setting GPU Memory Consumption Growth
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
    
# Show the GPU
gpu
Out[2]:
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')

SETTING RANDOM SEED FOR REPRODUCTIBILITY¶

  • To ensure reproductibility of results, we will be setting random seed to ensure results are reproducible.
In [3]:
tf.random.set_seed(42)

SETTING CHART CUSTOMIZATIONS FOR EDA¶

  • Before loading the CIFAR-10 dataset of colored images, we will be setting chart customizations in Seaborn to ensure a consistent and uniformed layout for our charts in this notebook.
In [4]:
# Change theme of charts
sns.set_theme(style='darkgrid')

# Change font of charts
sns.set(font='Century Gothic')

# Variable for color palettes
color_palette = sns.color_palette('muted')

LOADING AND IMPORTING THE CIFAR-10 DATASET¶

  • We will be importing the Cifar-10 Dataset using Keras datasets with this line of code tf.keras.datasets.cifar10.load_data() to load the data in.
  • First, we will visualize the shape of the data by unpacking it into the training and testing sets respectively.
In [5]:
# Load the dataset and unpack it into training and testing data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Print the shapes of the data
print(f"Training Data:\nX_train shape: {x_train.shape}, y_train shape: {y_train.shape}")
print(f"\nTesting Data:\nX_test shape: {x_test.shape}, y_test shape: {y_test.shape}")
print(f"\nNumber of Class Labels: {len(np.unique(y_train))}")
Training Data:
X_train shape: (50000, 32, 32, 3), y_train shape: (50000, 1)

Testing Data:
X_test shape: (10000, 32, 32, 3), y_test shape: (10000, 1)

Number of Class Labels: 10

WHAT CAN WE SEE FROM THE CIFAR-10 DATASET IMPORT?

  • From our results, we see that there are a total of 60000 rows for both the training and testing data combined.
  • In the X variables, we also see that each image is a 32 x 32 image with a 3 color channel (RGB). This indicates that the images provided are colored images.

DEFINING X AND Y VARIABLES WITH TRAIN DATA

  • Since testing of GANs typically involves generating new images without the need of a separate test data set, we will only be using the training set to train our GAN models, which consists of 50000 rows.

So, we will be using X and y for training of our GAN models:

X : uint8 NumPy array of coloured image data with shapes (50000, 32, 32, 3). Pixel values range from 0 to 255.

y : uint8 NumPy array of labels (integers in range 0-9) with shape (50000, 1).

In [6]:
# Concatenating the train and test data into X and y
X = x_train
y = y_train

print(f"X shape: {X.shape}")
print(f"y shape: {y.shape}")
X shape: (50000, 32, 32, 3)
y shape: (50000, 1)

CHECKING FOR NULL VALUES IN THE DATASET¶

  • Now, we will be checking for any null values using .isnull().
  • From our analysis, we can see that there are no null values in both X and y. Hence, we will be proceeding with EDA.
In [7]:
# Checking for any NULL values
def check_for_nulls(data, name):
    if np.isnan(data).any():
        print(f"There are null values in {name}.")
    else:
        print(f"There are 0 null values in {name}.")

check_for_nulls(X, "X")
check_for_nulls(y, "y")
There are 0 null values in X.
There are 0 null values in y.

EXPLORATORY DATA ANALYSIS¶

  • Now, we will move on to conducting exploratory data analysis of the data, to gain a better understanding of what we can find in the dataset and its characteristics.

In this section, we will be exploring class distribution, pixel distribution and others. Here is what will be covered in our EDA process:

  1. Visualizing the Image Dataset
  2. Visualizing the Class Distributions
  3. Visualizing Class Colour Distributions
  4. Visualizing Image Pixel Distribution
  5. Image Averaging

Each number represents a different item in the CIFAR-10 dataset :

Class (0 - 9) Labels
0 Airplane
1 Automobile
2 Bird
3 Cat
4 Deer
5 Dog
6 Frog
7 Horse
8 Ship
9 Truck

First, let us define our class labels that can be found in our dataset before we proceed with visualizing the images in our data.

In [8]:
# Instantiating the class labels of our dataset
class_labels = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

1.1 VISUALIZING THE IMAGE DATASET - CIFAR-10

From the images generated, we can see that all the images below are correctly labelled. Visually, although the images are quite blurry due to the image size being 32x32, it is still easy to tell the diffference between the classes.

However, upon further inspection, we see that there may be some problems our models could face when generating the images.

  • Diverse Variety of Animal Species : Aside from automobiles and vehicles, the dataset contains a wide range of animal species, including birds like ostriches and sparrows. This diversity presents a challenge as the model might not be able to clearly differentiate the different species and generate an amalgamation instead.

  • Image Zoom Levels : Another challenge lies with the variation in image zoom levels. Many images exhibit different sizes, particularly for cats and dogs. Some images are tightly focused on the animal's face, while others are more zoomed out, capturing the entire body of the animal. This variation in zoom levels can impact the model's ability to maintain consistent image generation quality.
In [9]:
# Visualizing a subset of the CIFAR-10 Dataset
fig, ax = plt.subplots(10, 10, figsize=(30, 30))

for i in range(10):
    images = X[np.squeeze(y == i)]
    random_index = np.random.choice(images.shape[0], 10, replace=False)
    images = images[random_index]
    label = class_labels[i]
    
    for j in range(10):
        subplot = ax[i, j]
        subplot.axis("off")
        subplot.imshow(images[j])
        subplot.set_title(label)

plt.show()
No description has been provided for this image

1.2 VISUALIZING THE CLASS DISTRIBUTIONS

  • When training deep learning models, it is crucial to check the distribution of the image classes in the dataset, enabling us to understand the best metrics to use and evaluate if anything is needed to balance the classes.
In [10]:
# Visualizing class distributions in the dataset

# Count the images for each label
labels, counts = np.unique(y, return_counts = True)
for label, count in zip(labels, counts):
    print(f"{class_labels[label]}: {count}")
    
# Display a barchart displaying the counts
plt.barh(labels, counts, tick_label=list(class_labels.values()))
plt.show()
Airplane: 5000
Automobile: 5000
Bird: 5000
Cat: 5000
Deer: 5000
Dog: 5000
Frog: 5000
Horse: 5000
Ship: 5000
Truck: 5000
No description has been provided for this image

ANALYSIS OF THE CLASS DISTRIBTIONS

From the bar graph and count displayed, it is apparent that there is a no imbalance at all found in the data with all classes containing exactly 5000 images, and the distribution of the image classes are even. Hence, this shows no class resampling will be needed as there is no particular bias to any specific class.

1.3 VISUALIZING CLASS COLOR DISTRIBUTIONS

  • Our models see these images differently from humans, they only understand the numbers, specifically they "see" these images as numbers/pixel values.
  • Our model will take in training examples and noise from some distribution to learn a transformation to the data distribution. Hence, viewing each classes distribution can help us understand and visualize what our model will be trying to learn.
In [11]:
bins = 32

fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(20, 5), sharex=True, sharey=True)

n_labels = len(np.unique(y))

for i, ax in zip(range(n_labels), axes.flat):
    idx = np.where(y == i)[0]
    ax.hist(X[idx, ..., 0].ravel(), bins=bins, color='r', alpha=.7)
    ax.hist(X[idx, ..., 1].ravel(), bins=bins, color='g', alpha=.7)
    ax.hist(X[idx, ..., 2].ravel(), bins=bins, color='b', alpha=.7)
    ax.set_title(class_labels[i])

fig.legend(['Red', 'Green', 'Blue'], loc='upper right', fontsize=12, ncol=3, bbox_to_anchor=(0.592, 1.0), frameon=False)
fig.suptitle('RGB Distribution by Class', fontsize=16, y=1.05)
plt.tight_layout()
plt.show()
No description has been provided for this image

1.4 VISUALIZING IMAGE PIXEL DISTRIBUTION

  • In this section, we will visualize the pixel intensity and find out the distribution of the pixel values.
  • To visualize it, we will plot the distribution of brightness to evaluate the distribution of pixel values.
In [12]:
# Visualizing the pixel values
print("Pixel Values:")
print("Max: ", np.max(X))
print("Min: ", np.min(X))

# Calculating the mean and standard deviation of the pixels
print("\nMean and Standard Deviation of Pixels for Images:")
print("Mean: ", np.mean(X, axis = (0, 1, 2)))
print("Standard Deviation: ", np.std(X, axis = (0, 1, 2)))
Pixel Values:
Max:  255
Min:  0

Mean and Standard Deviation of Pixels for Images:
Mean:  [125.30691805 122.95039414 113.86538318]
Standard Deviation:  [62.99321928 62.08870764 66.70489964]

ANALYSIS OF THE PIXEL DISTRIBUTIONS

Pixel Value
Based on the pixel values, as expected, the pixel distribution ranges from 0 to 255 for colored images.

Mean Pixel Value
The mean pixel values represent the average brightness for each color channel (red, green, blue) across all images in the CIFAR-10 dataset. Since the red channel has the highest mean value, it suggests that our images tend to have brighter red colors on average. Meanwhile, the blue has the lowest mean value, indicating that blue colors are less intense on average.

Standard Deviation Pixel Value
The standard deviation measures the spread or variability of pixel intensities within each color channel. Since the blue channel has the highest STD, it indicates that there is a greater variability in its pixel values. However, with green having the lowest variation, it suggests that the green channel has slightly more consistency in pixel values.

1.5 IMAGE AVERAGING OF PIXELS

We will now perform image averaging. This involves stacking of multiple photos on top of each other and averaging them together. We do this to see the noise of the image as well as observe all images in the dataset.

  • From our image averaging, we observe that there is no significance in the average of all images and cannot make anything out from the image, likely due to the color of the images overlaying each other to give this blur effect.
  • However, if looked closely at, one can see that there is a slight tinge of red color in the middle of the frame (nearly unnoticeable), which is likely due to the red channel having the highest mean value so the images have brighter red colors on average.
In [13]:
# Displaying the image averaging of all images combined
plt.imshow(np.mean(X, axis=0) / 255)
plt.grid(False) 
plt.title('Image Averaging of Pixels')
plt.show()
No description has been provided for this image

Now, we will be viewing the average image among each of the classes. First, we will split by the classes before finding the average among each of the classes.

In [14]:
# Creating the image averaging
fig, ax = plt.subplots(2, 5, figsize=(25, 10))

for idx, subplot in enumerate(ax.ravel()):
    avg_image = np.mean(X[np.squeeze(y == idx)], axis=0) / 255
    subplot.imshow(avg_image)
    subplot.set_title(f"{class_labels[idx]}", fontsize=20)
    subplot.axis("off")

plt.show()
No description has been provided for this image

ANALYZING THE AVERAGE IMAGE FOR EACH CLASS LABEL

  • When examining the average images for each class, we can see that the average images are blurry and unclear. However, we can still make out the images of an automobile, horse and truck fairly clearly.
  • One interesting fact to note is that the average horse image seems to have a more dominant presence when it faces the left, which is quite intriguing.
  • However, for most of the classes, while we are able to make out some form of contour in the center of the image, it is still very pixelated and extremely blurry.

FEATURE ENGINEERING AND PREPARATION OF DATA¶

Now, after performing EDA, we will be working on feature engineering to further prepare the data for model building. In this section, we will be looking into normalization and transforming the X and y data. Below are the steps we will work on:

  • Converting to TensorFlow Dataset & Normalizing Images to [-1, 1]
  • Creating a Batch of Real Images

CONVERTING TO TF DATASET AND NORMALIZING IMAGES

  • We need to normalize our images to the [-1, 1] range as our generators will be using the tanh() activation function.
  • We use the TensorFlow Dataset function to store X and y as tensor slices.
In [11]:
BUFFER_SIZE = 10_000
BATCH_SIZE = 128

dataset = tf.data.Dataset.from_tensor_slices((X, y))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

# Normalize images to range [-1, 1] as generator will be using tanh activation
dataset = dataset.map(lambda x, y: (tf.cast(x, tf.float32) / 127.5 - 1, y))
image_spec, label_spec = dataset.element_spec 

print(type(image_spec))
print(image_spec)
print(label_spec)
<class 'tensorflow.python.framework.tensor_spec.TensorSpec'>
TensorSpec(shape=(None, 32, 32, 3), dtype=tf.float32, name=None)
TensorSpec(shape=(None, 1), dtype=tf.uint8, name=None)

INSPECT THE SHAPE OF INPUT IMAGES AND LABELS IN EACH DATA BATCH

  • Upon inspection, we see that the batch sizes are indeed 128, and the images are within the size 32, 32, 3, indicating that the batch processing and normalization of images was successful.
In [12]:
for batch in dataset.take(1):
    image, label = batch
    print(f'Image shape: {image.shape}')
    print(f'Label shape: {label.shape}')
Image shape: (128, 32, 32, 3)
Label shape: (128, 1)

CREATING A BATCH OF REAL IMAGES

  • For this step, we do it as it will be used later for the computation of our FID scores, hence we will process the data here and apply it in the later stages of the notebook.
In [13]:
# Create an iterator for the dataset and get one batch of images
iterator = iter(dataset)
one_batch = next(iterator)

# Split the batch into images and labels
images, labels = one_batch

# Convert the images to a NumPy array
real_images = images[:100]
real_images.shape
Out[13]:
TensorShape([100, 32, 32, 3])

MODEL BUILDING AND DEVELOPMENT¶

To solve this GAN task, we will be making use of a few GAN architectures.

  • DCGAN - Deep Convolutional Generative Adversarial Network - Baseline
  • cDCGAN - Conditional Generative Adversarial Network + Gradient Tape
  • SNGAN - Spectral Normalization GAN
  • ACGAN - Auxilliary Classifier Generative Adversarial Network

Before building our models, we will discuss our evaluation techniques and how we will assess our model performance.

MODEL EVALUATION METHODS AND TECHNIQUES¶

There are many model evaluation models and techniques for GAN. In this analysis, we will mainly be looking at 3 types of evaluations.

Manual GAN Evaluation

Manually inspecting and judging the generated examples from different iteration steps. However, there are a few limitations to this method :

  1. It is subjective and includes the biases of the reviewer.
  2. It requires domain knowledge to tell what is realistic and what is not. In our specific case, it is extremely important to count on the aid of dermatologists who can assess the fake examples.
  3. It is limited in terms of the number of images that can be reviewed.
  4. It cannot be used for early stopping.

Qualitative GAN Evaluation

By identifying non numerical data in the images, we are able to use human subject evaluation or evaluation via comparison. Some techniques include using :

  1. Nearest Neighbors - Detect overfitting, generated samples are shown next to nearest neighbour in the training set
  2. Rapid Scene Categorization - Get participants to help classify real and fake data

Quantitative GAN Evaluation

Evaluating GAN on quantitative means refers to using a specific calculation of specific numerical scores, used to summarize the general quality of the images. Some metrics used will include :

  1. Fréchet Inception Distance - Measures the Fréchet distance between two multivariate Gaussian distributions fitted to the feature representations of real and generated samples.
  2. Inception Score - Measure of how realistic the generated image is.
  3. Kullback-Leibler Divergence (KL D) - Measures the distance between two statistical probability.

For the case of this analysis, we will mainly be using qualitative and quantitative GAN evaluation strategies to analyse the performance of our models.

DEFINING EVALUATION FUNCTIONS AND CLASSES FOR MODEL BUILDING¶

Now, we will be defining evaluation functions to be used for model building later on. Some functions and classes developed include :

  • Computing the FID Score and Inception Score
  • Plotting the Loss and FID Metrics
  • Storing the FID and IS results for each model
In [105]:
# Scale an array of images to a new size
def scale_images(images, new_shape):
    images = images.numpy()
    images_list = list()
    for image in images:
        image = (image + 1) * 127.5  # Reverse tanh normalization
        # Resize with nearest neighbor interpolation
        new_image = resize(image, new_shape, 0)
        # Store
        images_list.append(new_image)
    return asarray(images_list)

# Function to calculate Inception Score
def calculate_inception_score(images, n_split=10, eps=1E-16):
    # Load InceptionV3 model
    model = InceptionV3()
    # Enumerate splits of images/predictions
    scores = list()
    n_part = floor(images.shape[0] / n_split)
    for i in range(n_split):
        # Retrieve images
        ix_start, ix_end = i * n_part, (i + 1) * n_part
        subset = images[ix_start:ix_end]
        # Convert from uint8 to float32
        subset = subset.astype('float32')
        # Scale images to the required size
        subset = scale_images(subset, (299, 299, 3))
        # Pre-process images, scale to [-1, 1]
        subset = preprocess_input(subset)
        # Predict p(y|x)
        p_yx = model.predict(subset)
        # Calculate p(y)
        p_y = expand_dims(p_yx.mean(axis=0), 0)
        # Calculate KL divergence using log probabilities
        kl_d = p_yx * (log(p_yx + eps) - log(p_y + eps))
        # Sum over classes
        sum_kl_d = kl_d.sum(axis=1)
        # Average over images
        avg_kl_d = mean(sum_kl_d)
        # Undo the log
        is_score = exp(avg_kl_d)
        # Store
        scores.append(is_score)
    # Average across images
    is_avg, is_std = mean(scores), std(scores)
    return is_avg, is_std

# Function to calculate Frechet Inception Distance
def calculate_fid(fake_images):
    model = InceptionV3(include_top=False, pooling='avg', input_shape=(299, 299, 3))

    iterator = iter(dataset)
    one_batch = next(iterator)
    real_images, _ = one_batch
    real_images = real_images[:100]

    # Calculate activations
    real_images = real_images.astype('float32')
    fake_images = fake_images.astype('float32')

    real_images = scale_images(real_images, (299, 299, 3))
    fake_images = scale_images(fake_images, (299, 299, 3))

    real_images = preprocess_input(real_images)
    fake_images = preprocess_input(fake_images)

    act1 = model.predict(fake_images)
    act2 = model.predict(real_images)
    # Calculate mean and covariance statistics
    mu1, sigma1 = act1.mean(axis=0), cov(act1, rowvar=False)
    mu2, sigma2 = act2.mean(axis=0), cov(act2, rowvar=False)
    # Calculate sum squared difference between means
    ssdiff = np.sum((mu1 - mu2)**2.0)
    # Calculate sqrt of product between cov
    covmean = sqrtm(sigma1.dot(sigma2))
    # Check and correct imaginary numbers from sqrt
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    # Calculate score
    fid = ssdiff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
    return fid


# Function for plotting the model performance
def plot_model_performance(history):
    fig = plt.figure(figsize=(13,8))
    gs = fig.add_gridspec(2,2)
    ax1 = fig.add_subplot(gs[0, 0])
    ax2 = fig.add_subplot(gs[0, 1])
    ax3 = fig.add_subplot(gs[1, :])
    
    # Plot KL Divergence
    ax1 = plt.subplot(gs[0, 0])
    ax1.plot(history.history['kl_divergence'], label='KL Divergence', linewidth=3, color='red')
    ax1.legend()
    ax1.set_title('KL Divergence', fontsize=12, fontweight='bold')
    ax1.set_xlabel('Epoch', fontsize=12)
    ax1.set_ylabel('KL Divergence', fontsize=12)
    
    # Plot discriminator accuracy
    ax2 = plt.subplot(gs[0, 1])
    ax2.plot(history.history['d_acc'], label='Discriminator Accuracy', linewidth=3, color='green')
    ax2.legend()
    ax2.set_title('Discriminator Accuracy', fontsize=12, fontweight='bold')
    ax2.set_xlabel('Epoch', fontsize=12)
    ax2.set_ylabel('Accuracy', fontsize=12)
    
    # Plot generator and discriminator losses
    ax3 = plt.subplot(gs[1, :])
    ax3.plot(history.history['g_loss'], label='Generator Loss', linewidth=3)
    ax3.plot(history.history['d_real_loss'], label='Discriminator Real Loss', linewidth=3)
    ax3.plot(history.history['d_fake_loss'], label='Discriminator Fake Loss', linewidth=3)
    ax3.legend()
    ax3.set_title('Generator and Discriminator Losses', fontsize=12, fontweight='bold')
    ax3.set_xlabel('Epoch', fontsize=12)
    ax3.set_ylabel('Loss', fontsize=12)   

    plt.tight_layout()
    plt.show()
    
# Function for plotting the model performance for ACGAN (Include AUX Loss)
def plot_model_performance_acgan(history):
    fig = plt.figure(figsize=(13,8))
    gs = fig.add_gridspec(2,2)
    ax1 = fig.add_subplot(gs[0, 0])
    ax2 = fig.add_subplot(gs[0, 1])
    ax3 = fig.add_subplot(gs[1, :])
    
    # Plot KL Divergence
    ax1 = plt.subplot(gs[0, 0])
    ax1.plot(history.history['kl_divergence'], label='KL Divergence', linewidth=3, color='red')
    ax1.legend()
    ax1.set_title('KL Divergence', fontsize=12, fontweight='bold')
    ax1.set_xlabel('Epoch', fontsize=12)
    ax1.set_ylabel('KL Divergence', fontsize=12)
    
    # Plot discriminator accuracy
    ax2 = plt.subplot(gs[0, 1])
    ax2.plot(history.history['d_acc'], label='Discriminator Accuracy', linewidth=3, color='green')
    ax2.legend()
    ax2.set_title('Discriminator Accuracy', fontsize=12, fontweight='bold')
    ax2.set_xlabel('Epoch', fontsize=12)
    ax2.set_ylabel('Accuracy', fontsize=12)
    
    # Plot generator and discriminator losses
    ax3 = plt.subplot(gs[1, :])
    ax3.plot(history.history['g_loss'], label='Generator Loss', linewidth=3)
    ax3.plot(history.history['d_loss'], label='Discriminator Loss', linewidth=3)
    ax3.plot(history.history['aux_loss'], label='Auxilliary Loss', linewidth=3)
    ax3.legend()
    ax3.set_title('Generator, Discriminator & Auxilliary Losses', fontsize=12, fontweight='bold')
    ax3.set_xlabel('Epoch', fontsize=12)
    ax3.set_ylabel('Loss', fontsize=12)   

    plt.tight_layout()
    plt.show()

MODEL 1 : DCGAN MODEL - DEEP CONVOLUTIONAL GAN (BASELINE)¶

image.png

Architectural Guidelines for Stable Deep Convolutional GANs :

  • Employ an all-convolutional network approach:
    • This strategy is employed in our generator, enabling it to autonomously learn spatial upsampling and in the ac_discriminator.
  • Substitute pooling layers with strided convolutions (for the discriminator) and fractional-strided convolutions (for the generator).
  • Implement batch normalization in both the generator and discriminator networks.
    • However, we should take note that directly applying batch normalization to all layers can lead to issues such as sample oscillation and model instability.
  • Eliminate fully connected hidden layers when dealing with deeper architectures.
  • Utilize ReLU activation functions in the generator for all layers, except for the output layer, which should employ Tanh activation.
  • Incorporate LeakyReLU activation functions in the ac_ac_discriminator for all layers.

Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434.

BUILDING THE DCGAN GENERATOR FUNCTION

In [19]:
def create_generator(latent_dim):
    
    # Define the Sequential Model
    model = Sequential(name='DCGAN_Generator')
    
    # Foundation for 4x4 image
    n_nodes = 256 * 4 * 4
    model.add(Dense(n_nodes, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Reshape((4, 4, 256)))
    
    # Upsample to 8x8
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Upsample to 16x16
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Upsample to 32x32
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Output layer
    model.add(Conv2D(3, (3, 3), activation='tanh', padding='same'))
    return model
In [20]:
create_generator(latent_dim=100).summary()
Model: "DCGAN_Generator"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 4096)              413696    
                                                                 
 leaky_re_lu (LeakyReLU)     (None, 4096)              0         
                                                                 
 reshape (Reshape)           (None, 4, 4, 256)         0         
                                                                 
 conv2d_transpose (Conv2DTra  (None, 8, 8, 128)        524416    
 nspose)                                                         
                                                                 
 leaky_re_lu_1 (LeakyReLU)   (None, 8, 8, 128)         0         
                                                                 
 conv2d_transpose_1 (Conv2DT  (None, 16, 16, 128)      262272    
 ranspose)                                                       
                                                                 
 leaky_re_lu_2 (LeakyReLU)   (None, 16, 16, 128)       0         
                                                                 
 conv2d_transpose_2 (Conv2DT  (None, 32, 32, 128)      262272    
 ranspose)                                                       
                                                                 
 leaky_re_lu_3 (LeakyReLU)   (None, 32, 32, 128)       0         
                                                                 
 conv2d (Conv2D)             (None, 32, 32, 3)         3459      
                                                                 
=================================================================
Total params: 1,466,115
Trainable params: 1,466,115
Non-trainable params: 0
_________________________________________________________________

BUILDING THE DCGAN DISCRIMINATOR

No description has been provided for this image

DCGAN typically uses strided convolutions, instead of pooling layers, to increase the spatial resolution of the generated images. In the discriminator network, it uses a deep convolutional architecture to learn a rich and diverse set of features that can distinguish real images from fake images.

The discriminator takes an image as input, and tries to classify it as "real" or "generated". In this sense, it's like any other binary image classification neural network. We'll use a convolutional neural network (CNN) which outputs a single number for each image. We'll use a stride of 2 to progressively reduce the size of the output feature map, until we have only 1 output.

No description has been provided for this image

We are using Leaky ReLU for the discriminator. This is different from a regular ReLU function as it allows negative value to be passed as a small gradient signal to the next layer. This prevents a 'dying state' by allowing some negative values to pass through. The whole idea behind making the Generator work is to receive gradient values from the Discriminator, and if the network is stuck in a dying state situation, the learning process won’t happen. From this article

In [21]:
def create_discriminator(in_shape=(32, 32, 3)):
    
    # Define the Sequential Model
    model = Sequential(name='DCGAN_Discriminator')
    
    # Normal
    model.add(Conv2D(64, (3, 3), padding='same', input_shape=in_shape))
    model.add(LeakyReLU(alpha=0.2))
    
    # Downsample
    model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Downsample
    model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Downsample
    model.add(Conv2D(256, (3, 3), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    
    # Classifier
    model.add(Flatten())
    model.add(Dropout(0.4))
    model.add(Dense(1, activation='sigmoid'))
    
    # Compile model
    opt = Adam(learning_rate=0.0002, beta_1=0.5)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    return model
In [22]:
create_discriminator().summary()
Model: "DCGAN_Discriminator"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_1 (Conv2D)           (None, 32, 32, 64)        1792      
                                                                 
 leaky_re_lu_4 (LeakyReLU)   (None, 32, 32, 64)        0         
                                                                 
 conv2d_2 (Conv2D)           (None, 16, 16, 128)       73856     
                                                                 
 leaky_re_lu_5 (LeakyReLU)   (None, 16, 16, 128)       0         
                                                                 
 conv2d_3 (Conv2D)           (None, 8, 8, 128)         147584    
                                                                 
 leaky_re_lu_6 (LeakyReLU)   (None, 8, 8, 128)         0         
                                                                 
 conv2d_4 (Conv2D)           (None, 4, 4, 256)         295168    
                                                                 
 leaky_re_lu_7 (LeakyReLU)   (None, 4, 4, 256)         0         
                                                                 
 flatten (Flatten)           (None, 4096)              0         
                                                                 
 dropout (Dropout)           (None, 4096)              0         
                                                                 
 dense_1 (Dense)             (None, 1)                 4097      
                                                                 
=================================================================
Total params: 522,497
Trainable params: 522,497
Non-trainable params: 0
_________________________________________________________________

BUILDING THE TRAINING FUNCTIONS AND CLASSES FOR DCGAN

In [23]:
class DCGAN(Model):
    def __init__(self, generator, discriminator, latent_dim):
        super(DCGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(DCGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_real_loss_metric = keras.metrics.Mean(name='d_real_loss')
        self.d_fake_loss_metric = keras.metrics.Mean(name='d_fake_loss')
        self.d_acc_metric = keras.metrics.BinaryAccuracy(name='d_acc')
        self.kl_metric = keras.metrics.KLDivergence()

    @property
    def metrics(self):
        return [self.g_loss_metric, self.d_real_loss_metric, self.d_fake_loss_metric, self.d_acc_metric]

    def train_step(self, data):
        real_images, _ = data
        batch_size = tf.shape(real_images)[0]

        # Train discriminator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        fake_labels = tf.zeros((batch_size, 1))
        real_labels = tf.ones((batch_size, 1))

        # Freeze generator
        self.discriminator.trainable = True
        self.generator.trainable = False

        with tf.GradientTape() as disc_tape:
            generated_images = self.generator(random_latent_vectors, training=True)
            real_output = self.discriminator(real_images, training=True)
            fake_output = self.discriminator(generated_images, training=True)
            d_loss_real = self.loss_fn(real_labels, real_output)
            d_loss_fake = self.loss_fn(fake_labels, fake_output)
            d_loss = d_loss_real + d_loss_fake

        disc_grads = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(disc_grads, self.discriminator.trainable_variables))

        # Train generator twice
        for _ in range(2):
            random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
            misleading_labels = tf.ones((batch_size, 1))

            # Freeze discriminator
            self.discriminator.trainable = False
            self.generator.trainable = True

            with tf.GradientTape() as gen_tape:
                generated_images = self.generator(random_latent_vectors, training=True)
                pred_on_fake = self.discriminator(generated_images, training=True)
                g_loss = self.loss_fn(misleading_labels, pred_on_fake)
            
            gen_grads = gen_tape.gradient(g_loss, self.generator.trainable_variables)
            self.g_optimizer.apply_gradients(zip(gen_grads, self.generator.trainable_variables))

        # Update metrics
        self.g_loss_metric.update_state(g_loss)
        self.d_real_loss_metric.update_state(d_loss_real)
        self.d_fake_loss_metric.update_state(d_loss_fake)
        self.d_acc_metric.update_state(real_labels, real_output)
        self.kl_metric.update_state(y_true=real_images, y_pred=generated_images)

        return {
            'g_loss': self.g_loss_metric.result(),
            'd_real_loss': self.d_real_loss_metric.result(),
            'd_fake_loss': self.d_fake_loss_metric.result(),
            'd_acc': self.d_acc_metric.result(),
            'kl_divergence': self.kl_metric.result(),
        }
In [24]:
class GANMonitor(Callback):
    def __init__(self, latent_dim):
        self.latent_dim = latent_dim
        self.fid_scores = []
        self.is_scores = []

    def on_epoch_end(self, epoch, logs=None):
        # plot 100 generated images and save weights every 10 epochs
        latent_vectors = tf.random.normal(shape=(100, self.latent_dim))

        generated_images = self.model.generator(latent_vectors, training=False)
        generated_images = (generated_images + 1) / 2

        if not os.path.exists('modelweights/dcgan'):
            os.makedirs('modelweights/dcgan')

        if not os.path.exists('images/dcgan_images'):
            os.makedirs('images/dcgan_images')
            
        if (epoch + 1) % 50 == 0:
            # Calculate FID and IS
            is_avg, is_std = calculate_inception_score(generated_images)
            fid = calculate_fid(generated_images)
            
            # Append metrics to lists
            self.fid_scores.append(fid)
            self.is_scores.append((is_avg, is_std))
            
            print(f'Epoch {epoch + 1}: Average (IS): {is_avg} | Std (IS): {is_std} | FID Score: {fid}')

        if (epoch + 1) % 10 == 0:
            if not os.path.exists(f'modelweights/dcgan/epoch_{epoch + 1}'):
                os.makedirs(f'modelweights/dcgan/epoch_{epoch + 1}')
                self.model.generator.save_weights(f'modelweights/dcgan/epoch_{epoch + 1}/generator_weights_epoch_{epoch + 1}.h5')
                self.model.discriminator.save_weights(f'modelweights/dcgan/epoch_{epoch + 1}/discriminator_weights_epoch_{epoch + 1}.h5')
                print(f'\nSaving Model Weights at Epoch {epoch + 1}.\n')

            fig, axes = plt.subplots(10, 10, figsize=(20, 20))
            axes = axes.flatten()

            for i, ax in enumerate(axes):
                ax.imshow(generated_images[i])
                ax.axis('off')

            plt.tight_layout()
            plt.savefig(f'images/dcgan_images/generated_img_{epoch + 1}.png')
            plt.close()
In [73]:
EPOCHS = 200
LATENT_DIM = 100    
LEARNING_RATE_D = 0.0002
LEARNING_RATE_G = 0.0002
BETA_1 = 0.5
LABEL_SMOOTHING = 0.1

callbacks = [GANMonitor(LATENT_DIM)]

generator = create_generator(LATENT_DIM)
discriminator = create_discriminator()
dcgan = DCGAN(generator, discriminator, latent_dim=LATENT_DIM)
dcgan.compile(
    g_optimizer=Adam(learning_rate=LEARNING_RATE_G, beta_1=BETA_1),
    d_optimizer=Adam(learning_rate=LEARNING_RATE_D, beta_1=BETA_1),
    loss_fn=BinaryCrossentropy(label_smoothing=LABEL_SMOOTHING)
)
In [74]:
history = dcgan.fit(dataset, epochs=EPOCHS, callbacks=callbacks, use_multiprocessing=True)
Epoch 1/200
391/391 [==============================] - 19s 45ms/step - g_loss: 0.7754 - d_real_loss: 0.6342 - d_fake_loss: 0.7219 - d_acc: 0.6260 - kl_divergence: 4.8469
Epoch 2/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.7565 - d_real_loss: 0.6964 - d_fake_loss: 0.6760 - d_acc: 0.4744 - kl_divergence: 5.2824
Epoch 3/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.7146 - d_real_loss: 0.6953 - d_fake_loss: 0.6935 - d_acc: 0.4948 - kl_divergence: 5.3343
Epoch 4/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.7083 - d_real_loss: 0.6980 - d_fake_loss: 0.6885 - d_acc: 0.4523 - kl_divergence: 5.1964
Epoch 5/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7082 - d_real_loss: 0.6968 - d_fake_loss: 0.6906 - d_acc: 0.4417 - kl_divergence: 5.1099
Epoch 6/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7054 - d_real_loss: 0.6941 - d_fake_loss: 0.6921 - d_acc: 0.4398 - kl_divergence: 5.0769
Epoch 7/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7052 - d_real_loss: 0.6961 - d_fake_loss: 0.6924 - d_acc: 0.4263 - kl_divergence: 5.0406
Epoch 8/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7034 - d_real_loss: 0.6968 - d_fake_loss: 0.6914 - d_acc: 0.4554 - kl_divergence: 5.0098
Epoch 9/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7005 - d_real_loss: 0.6934 - d_fake_loss: 0.6953 - d_acc: 0.4782 - kl_divergence: 4.9887
Epoch 10/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7024 - d_real_loss: 0.6928 - d_fake_loss: 0.6936 - d_acc: 0.4778 - kl_divergence: 4.9642
Epoch 11/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7047 - d_real_loss: 0.6946 - d_fake_loss: 0.6945 - d_acc: 0.4450 - kl_divergence: 4.9562
Epoch 12/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7090 - d_real_loss: 0.6955 - d_fake_loss: 0.6911 - d_acc: 0.4654 - kl_divergence: 4.9578
Epoch 13/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7145 - d_real_loss: 0.6938 - d_fake_loss: 0.6908 - d_acc: 0.4593 - kl_divergence: 4.9494
Epoch 14/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7115 - d_real_loss: 0.6923 - d_fake_loss: 0.6934 - d_acc: 0.4703 - kl_divergence: 4.9386
Epoch 15/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7212 - d_real_loss: 0.6922 - d_fake_loss: 0.6886 - d_acc: 0.4405 - kl_divergence: 4.9372
Epoch 16/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7350 - d_real_loss: 0.6924 - d_fake_loss: 0.6853 - d_acc: 0.4581 - kl_divergence: 4.9414
Epoch 17/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7606 - d_real_loss: 0.6878 - d_fake_loss: 0.6772 - d_acc: 0.4672 - kl_divergence: 4.9471
Epoch 18/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7580 - d_real_loss: 0.6868 - d_fake_loss: 0.6777 - d_acc: 0.4717 - kl_divergence: 4.9517
Epoch 19/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7549 - d_real_loss: 0.6892 - d_fake_loss: 0.6759 - d_acc: 0.4503 - kl_divergence: 4.9504
Epoch 20/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7392 - d_real_loss: 0.6875 - d_fake_loss: 0.6811 - d_acc: 0.4448 - kl_divergence: 4.9511
Epoch 21/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7527 - d_real_loss: 0.6892 - d_fake_loss: 0.6785 - d_acc: 0.4392 - kl_divergence: 4.9494
Epoch 22/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7596 - d_real_loss: 0.6897 - d_fake_loss: 0.6706 - d_acc: 0.4200 - kl_divergence: 4.9476
Epoch 23/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7292 - d_real_loss: 0.6902 - d_fake_loss: 0.6823 - d_acc: 0.4195 - kl_divergence: 4.9444
Epoch 24/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7163 - d_real_loss: 0.6907 - d_fake_loss: 0.6883 - d_acc: 0.4083 - kl_divergence: 4.9399
Epoch 25/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7004 - d_real_loss: 0.6928 - d_fake_loss: 0.6921 - d_acc: 0.4086 - kl_divergence: 4.9372
Epoch 26/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6997 - d_real_loss: 0.6937 - d_fake_loss: 0.6929 - d_acc: 0.4091 - kl_divergence: 4.9369
Epoch 27/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6998 - d_real_loss: 0.6930 - d_fake_loss: 0.6929 - d_acc: 0.4159 - kl_divergence: 4.9352
Epoch 28/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6993 - d_real_loss: 0.6925 - d_fake_loss: 0.6943 - d_acc: 0.4142 - kl_divergence: 4.9337
Epoch 29/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6976 - d_real_loss: 0.6928 - d_fake_loss: 0.6937 - d_acc: 0.4266 - kl_divergence: 4.9327
Epoch 30/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.6988 - d_real_loss: 0.6928 - d_fake_loss: 0.6934 - d_acc: 0.4174 - kl_divergence: 4.9329
Epoch 31/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6975 - d_real_loss: 0.6922 - d_fake_loss: 0.6947 - d_acc: 0.4309 - kl_divergence: 4.9325
Epoch 32/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6992 - d_real_loss: 0.6939 - d_fake_loss: 0.6938 - d_acc: 0.4249 - kl_divergence: 4.9319
Epoch 33/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6987 - d_real_loss: 0.6934 - d_fake_loss: 0.6939 - d_acc: 0.4340 - kl_divergence: 4.9331
Epoch 34/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6995 - d_real_loss: 0.6935 - d_fake_loss: 0.6929 - d_acc: 0.4076 - kl_divergence: 4.9339
Epoch 35/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6991 - d_real_loss: 0.6930 - d_fake_loss: 0.6934 - d_acc: 0.4349 - kl_divergence: 4.9349
Epoch 36/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.6994 - d_real_loss: 0.6932 - d_fake_loss: 0.6922 - d_acc: 0.4282 - kl_divergence: 4.9351
Epoch 37/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7018 - d_real_loss: 0.6944 - d_fake_loss: 0.6925 - d_acc: 0.4075 - kl_divergence: 4.9337
Epoch 38/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7000 - d_real_loss: 0.6924 - d_fake_loss: 0.6935 - d_acc: 0.4419 - kl_divergence: 4.9332
Epoch 39/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7007 - d_real_loss: 0.6929 - d_fake_loss: 0.6928 - d_acc: 0.4290 - kl_divergence: 4.9340
Epoch 40/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7008 - d_real_loss: 0.6930 - d_fake_loss: 0.6921 - d_acc: 0.4405 - kl_divergence: 4.9337
Epoch 41/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7015 - d_real_loss: 0.6927 - d_fake_loss: 0.6934 - d_acc: 0.4437 - kl_divergence: 4.9326
Epoch 42/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7021 - d_real_loss: 0.6927 - d_fake_loss: 0.6918 - d_acc: 0.4526 - kl_divergence: 4.9313
Epoch 43/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7013 - d_real_loss: 0.6928 - d_fake_loss: 0.6932 - d_acc: 0.4492 - kl_divergence: 4.9308
Epoch 44/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7019 - d_real_loss: 0.6928 - d_fake_loss: 0.6925 - d_acc: 0.4481 - kl_divergence: 4.9302
Epoch 45/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7022 - d_real_loss: 0.6928 - d_fake_loss: 0.6917 - d_acc: 0.4413 - kl_divergence: 4.9298
Epoch 46/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7023 - d_real_loss: 0.6917 - d_fake_loss: 0.6922 - d_acc: 0.4534 - kl_divergence: 4.9293
Epoch 47/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7034 - d_real_loss: 0.6929 - d_fake_loss: 0.6911 - d_acc: 0.4427 - kl_divergence: 4.9285
Epoch 48/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7034 - d_real_loss: 0.6924 - d_fake_loss: 0.6910 - d_acc: 0.4395 - kl_divergence: 4.9278
Epoch 49/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7033 - d_real_loss: 0.6921 - d_fake_loss: 0.6919 - d_acc: 0.4395 - kl_divergence: 4.9266
Epoch 50/200
1/1 [==============================] - 1s 937ms/step g_loss: 0.7046 - d_real_loss: 0.6914 - d_fake_loss: 0.6910 - d_acc: 0.4477 - kl_divergence: 4.92
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 125ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 50: Average (IS): 2.8625476360321045 | Std (IS): 0.38755324482917786 | FID Score: 241.99337455709366
391/391 [==============================] - 42s 109ms/step - g_loss: 0.7046 - d_real_loss: 0.6914 - d_fake_loss: 0.6911 - d_acc: 0.4479 - kl_divergence: 4.9252
Epoch 51/200
391/391 [==============================] - 17s 43ms/step - g_loss: 0.7050 - d_real_loss: 0.6918 - d_fake_loss: 0.6916 - d_acc: 0.4447 - kl_divergence: 4.9245
Epoch 52/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7052 - d_real_loss: 0.6922 - d_fake_loss: 0.6899 - d_acc: 0.4443 - kl_divergence: 4.9241
Epoch 53/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7055 - d_real_loss: 0.6908 - d_fake_loss: 0.6919 - d_acc: 0.4605 - kl_divergence: 4.9240
Epoch 54/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7063 - d_real_loss: 0.6925 - d_fake_loss: 0.6904 - d_acc: 0.4409 - kl_divergence: 4.9240
Epoch 55/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7061 - d_real_loss: 0.6914 - d_fake_loss: 0.6901 - d_acc: 0.4464 - kl_divergence: 4.9232
Epoch 56/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7063 - d_real_loss: 0.6903 - d_fake_loss: 0.6912 - d_acc: 0.4506 - kl_divergence: 4.9225
Epoch 57/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7075 - d_real_loss: 0.6915 - d_fake_loss: 0.6898 - d_acc: 0.4558 - kl_divergence: 4.9220
Epoch 58/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7080 - d_real_loss: 0.6907 - d_fake_loss: 0.6900 - d_acc: 0.4509 - kl_divergence: 4.9214
Epoch 59/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7107 - d_real_loss: 0.6918 - d_fake_loss: 0.6877 - d_acc: 0.4420 - kl_divergence: 4.9212
Epoch 60/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7102 - d_real_loss: 0.6903 - d_fake_loss: 0.6885 - d_acc: 0.4546 - kl_divergence: 4.9210
Epoch 61/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7113 - d_real_loss: 0.6899 - d_fake_loss: 0.6872 - d_acc: 0.4534 - kl_divergence: 4.9203
Epoch 62/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7131 - d_real_loss: 0.6891 - d_fake_loss: 0.6877 - d_acc: 0.4548 - kl_divergence: 4.9202
Epoch 63/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7142 - d_real_loss: 0.6889 - d_fake_loss: 0.6863 - d_acc: 0.4593 - kl_divergence: 4.9200
Epoch 64/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7153 - d_real_loss: 0.6878 - d_fake_loss: 0.6873 - d_acc: 0.4567 - kl_divergence: 4.9200
Epoch 65/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7173 - d_real_loss: 0.6892 - d_fake_loss: 0.6844 - d_acc: 0.4506 - kl_divergence: 4.9194
Epoch 66/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7184 - d_real_loss: 0.6878 - d_fake_loss: 0.6855 - d_acc: 0.4588 - kl_divergence: 4.9192
Epoch 67/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7191 - d_real_loss: 0.6872 - d_fake_loss: 0.6835 - d_acc: 0.4677 - kl_divergence: 4.9189
Epoch 68/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7211 - d_real_loss: 0.6869 - d_fake_loss: 0.6837 - d_acc: 0.4681 - kl_divergence: 4.9189
Epoch 69/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7229 - d_real_loss: 0.6853 - d_fake_loss: 0.6830 - d_acc: 0.4688 - kl_divergence: 4.9190
Epoch 70/200
391/391 [==============================] - 24s 61ms/step - g_loss: 0.7262 - d_real_loss: 0.6845 - d_fake_loss: 0.6807 - d_acc: 0.4733 - kl_divergence: 4.9186
Epoch 71/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7279 - d_real_loss: 0.6847 - d_fake_loss: 0.6799 - d_acc: 0.4712 - kl_divergence: 4.9187
Epoch 72/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7307 - d_real_loss: 0.6836 - d_fake_loss: 0.6788 - d_acc: 0.4709 - kl_divergence: 4.9186
Epoch 73/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7331 - d_real_loss: 0.6823 - d_fake_loss: 0.6777 - d_acc: 0.4758 - kl_divergence: 4.9187
Epoch 74/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7350 - d_real_loss: 0.6826 - d_fake_loss: 0.6774 - d_acc: 0.4796 - kl_divergence: 4.9185
Epoch 75/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7364 - d_real_loss: 0.6802 - d_fake_loss: 0.6742 - d_acc: 0.4809 - kl_divergence: 4.9184
Epoch 76/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7423 - d_real_loss: 0.6802 - d_fake_loss: 0.6744 - d_acc: 0.4756 - kl_divergence: 4.9189
Epoch 77/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7424 - d_real_loss: 0.6804 - d_fake_loss: 0.6728 - d_acc: 0.4791 - kl_divergence: 4.9191
Epoch 78/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7456 - d_real_loss: 0.6778 - d_fake_loss: 0.6721 - d_acc: 0.4841 - kl_divergence: 4.9196
Epoch 79/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7468 - d_real_loss: 0.6793 - d_fake_loss: 0.6704 - d_acc: 0.4820 - kl_divergence: 4.9198
Epoch 80/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7493 - d_real_loss: 0.6768 - d_fake_loss: 0.6702 - d_acc: 0.4868 - kl_divergence: 4.9199
Epoch 81/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7537 - d_real_loss: 0.6765 - d_fake_loss: 0.6690 - d_acc: 0.4876 - kl_divergence: 4.9203
Epoch 82/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7549 - d_real_loss: 0.6760 - d_fake_loss: 0.6667 - d_acc: 0.4879 - kl_divergence: 4.9206
Epoch 83/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7570 - d_real_loss: 0.6755 - d_fake_loss: 0.6673 - d_acc: 0.4891 - kl_divergence: 4.9206
Epoch 84/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7619 - d_real_loss: 0.6727 - d_fake_loss: 0.6632 - d_acc: 0.4985 - kl_divergence: 4.9208
Epoch 85/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7611 - d_real_loss: 0.6736 - d_fake_loss: 0.6646 - d_acc: 0.4930 - kl_divergence: 4.9212
Epoch 86/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7642 - d_real_loss: 0.6717 - d_fake_loss: 0.6613 - d_acc: 0.5014 - kl_divergence: 4.9213
Epoch 87/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7678 - d_real_loss: 0.6715 - d_fake_loss: 0.6599 - d_acc: 0.5003 - kl_divergence: 4.9216
Epoch 88/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7705 - d_real_loss: 0.6704 - d_fake_loss: 0.6595 - d_acc: 0.5012 - kl_divergence: 4.9218
Epoch 89/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7738 - d_real_loss: 0.6704 - d_fake_loss: 0.6591 - d_acc: 0.5044 - kl_divergence: 4.9219
Epoch 90/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.7760 - d_real_loss: 0.6707 - d_fake_loss: 0.6598 - d_acc: 0.5009 - kl_divergence: 4.9224
Epoch 91/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7753 - d_real_loss: 0.6677 - d_fake_loss: 0.6560 - d_acc: 0.5082 - kl_divergence: 4.9231
Epoch 92/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7805 - d_real_loss: 0.6683 - d_fake_loss: 0.6561 - d_acc: 0.5107 - kl_divergence: 4.9233
Epoch 93/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7810 - d_real_loss: 0.6680 - d_fake_loss: 0.6559 - d_acc: 0.5137 - kl_divergence: 4.9235
Epoch 94/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7804 - d_real_loss: 0.6662 - d_fake_loss: 0.6538 - d_acc: 0.5162 - kl_divergence: 4.9240
Epoch 95/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7862 - d_real_loss: 0.6656 - d_fake_loss: 0.6530 - d_acc: 0.5136 - kl_divergence: 4.9247
Epoch 96/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7881 - d_real_loss: 0.6651 - d_fake_loss: 0.6512 - d_acc: 0.5161 - kl_divergence: 4.9255
Epoch 97/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7923 - d_real_loss: 0.6637 - d_fake_loss: 0.6506 - d_acc: 0.5181 - kl_divergence: 4.9263
Epoch 98/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7924 - d_real_loss: 0.6649 - d_fake_loss: 0.6521 - d_acc: 0.5201 - kl_divergence: 4.9268
Epoch 99/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7961 - d_real_loss: 0.6631 - d_fake_loss: 0.6473 - d_acc: 0.5175 - kl_divergence: 4.9273
Epoch 100/200
1/1 [==============================] - 1s 906ms/step g_loss: 0.7971 - d_real_loss: 0.6617 - d_fake_loss: 0.6476 - d_acc: 0.5256 - kl_divergence: 4.92
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
4/4 [==============================] - 1s 135ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 100: Average (IS): 3.357959032058716 | Std (IS): 0.44014039635658264 | FID Score: 216.90137772602665
391/391 [==============================] - 40s 102ms/step - g_loss: 0.7970 - d_real_loss: 0.6619 - d_fake_loss: 0.6475 - d_acc: 0.5253 - kl_divergence: 4.9279
Epoch 101/200
391/391 [==============================] - 17s 43ms/step - g_loss: 0.8001 - d_real_loss: 0.6609 - d_fake_loss: 0.6463 - d_acc: 0.5267 - kl_divergence: 4.9286
Epoch 102/200
391/391 [==============================] - 17s 43ms/step - g_loss: 0.8038 - d_real_loss: 0.6603 - d_fake_loss: 0.6447 - d_acc: 0.5262 - kl_divergence: 4.9293
Epoch 103/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8061 - d_real_loss: 0.6599 - d_fake_loss: 0.6440 - d_acc: 0.5280 - kl_divergence: 4.9301
Epoch 104/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8062 - d_real_loss: 0.6602 - d_fake_loss: 0.6456 - d_acc: 0.5289 - kl_divergence: 4.9308
Epoch 105/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8096 - d_real_loss: 0.6579 - d_fake_loss: 0.6423 - d_acc: 0.5319 - kl_divergence: 4.9316
Epoch 106/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8166 - d_real_loss: 0.6582 - d_fake_loss: 0.6407 - d_acc: 0.5324 - kl_divergence: 4.9322
Epoch 107/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8134 - d_real_loss: 0.6578 - d_fake_loss: 0.6403 - d_acc: 0.5343 - kl_divergence: 4.9330
Epoch 108/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8160 - d_real_loss: 0.6553 - d_fake_loss: 0.6388 - d_acc: 0.5370 - kl_divergence: 4.9339
Epoch 109/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8231 - d_real_loss: 0.6548 - d_fake_loss: 0.6362 - d_acc: 0.5367 - kl_divergence: 4.9349
Epoch 110/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.8225 - d_real_loss: 0.6553 - d_fake_loss: 0.6359 - d_acc: 0.5376 - kl_divergence: 4.9356
Epoch 111/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8235 - d_real_loss: 0.6534 - d_fake_loss: 0.6359 - d_acc: 0.5415 - kl_divergence: 4.9364
Epoch 112/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8294 - d_real_loss: 0.6536 - d_fake_loss: 0.6348 - d_acc: 0.5437 - kl_divergence: 4.9372
Epoch 113/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8307 - d_real_loss: 0.6508 - d_fake_loss: 0.6304 - d_acc: 0.5452 - kl_divergence: 4.9380
Epoch 114/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8360 - d_real_loss: 0.6495 - d_fake_loss: 0.6293 - d_acc: 0.5505 - kl_divergence: 4.9389
Epoch 115/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8375 - d_real_loss: 0.6503 - d_fake_loss: 0.6306 - d_acc: 0.5480 - kl_divergence: 4.9399
Epoch 116/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8459 - d_real_loss: 0.6481 - d_fake_loss: 0.6265 - d_acc: 0.5525 - kl_divergence: 4.9409
Epoch 117/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8458 - d_real_loss: 0.6489 - d_fake_loss: 0.6270 - d_acc: 0.5518 - kl_divergence: 4.9416
Epoch 118/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8458 - d_real_loss: 0.6469 - d_fake_loss: 0.6239 - d_acc: 0.5538 - kl_divergence: 4.9425
Epoch 119/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8505 - d_real_loss: 0.6462 - d_fake_loss: 0.6234 - d_acc: 0.5553 - kl_divergence: 4.9433
Epoch 120/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.8546 - d_real_loss: 0.6455 - d_fake_loss: 0.6221 - d_acc: 0.5575 - kl_divergence: 4.9442
Epoch 121/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8390 - d_real_loss: 0.6688 - d_fake_loss: 0.6552 - d_acc: 0.5412 - kl_divergence: 4.9450
Epoch 122/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8318 - d_real_loss: 0.6489 - d_fake_loss: 0.6290 - d_acc: 0.5515 - kl_divergence: 4.9457
Epoch 123/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8550 - d_real_loss: 0.6419 - d_fake_loss: 0.6179 - d_acc: 0.5627 - kl_divergence: 4.9464
Epoch 124/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8622 - d_real_loss: 0.6414 - d_fake_loss: 0.6164 - d_acc: 0.5642 - kl_divergence: 4.9473
Epoch 125/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8630 - d_real_loss: 0.6419 - d_fake_loss: 0.6179 - d_acc: 0.5644 - kl_divergence: 4.9481
Epoch 126/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8667 - d_real_loss: 0.6394 - d_fake_loss: 0.6131 - d_acc: 0.5664 - kl_divergence: 4.9491
Epoch 127/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8740 - d_real_loss: 0.6401 - d_fake_loss: 0.6142 - d_acc: 0.5678 - kl_divergence: 4.9500
Epoch 128/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8749 - d_real_loss: 0.6385 - d_fake_loss: 0.6114 - d_acc: 0.5672 - kl_divergence: 4.9509
Epoch 129/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8801 - d_real_loss: 0.6411 - d_fake_loss: 0.6133 - d_acc: 0.5675 - kl_divergence: 4.9517
Epoch 130/200
391/391 [==============================] - 24s 62ms/step - g_loss: 0.8822 - d_real_loss: 0.6379 - d_fake_loss: 0.6104 - d_acc: 0.5724 - kl_divergence: 4.9526
Epoch 131/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8729 - d_real_loss: 0.6423 - d_fake_loss: 0.6165 - d_acc: 0.5689 - kl_divergence: 4.9533
Epoch 132/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8844 - d_real_loss: 0.6376 - d_fake_loss: 0.6107 - d_acc: 0.5758 - kl_divergence: 4.9540
Epoch 133/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8887 - d_real_loss: 0.6340 - d_fake_loss: 0.6066 - d_acc: 0.5790 - kl_divergence: 4.9546
Epoch 134/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8889 - d_real_loss: 0.6336 - d_fake_loss: 0.6055 - d_acc: 0.5833 - kl_divergence: 4.9555
Epoch 135/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8960 - d_real_loss: 0.6332 - d_fake_loss: 0.6036 - d_acc: 0.5821 - kl_divergence: 4.9565
Epoch 136/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9018 - d_real_loss: 0.6326 - d_fake_loss: 0.6033 - d_acc: 0.5826 - kl_divergence: 4.9573
Epoch 137/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9013 - d_real_loss: 0.6315 - d_fake_loss: 0.6018 - d_acc: 0.5836 - kl_divergence: 4.9583
Epoch 138/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9061 - d_real_loss: 0.6337 - d_fake_loss: 0.6032 - d_acc: 0.5851 - kl_divergence: 4.9593
Epoch 139/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9088 - d_real_loss: 0.6302 - d_fake_loss: 0.5993 - d_acc: 0.5856 - kl_divergence: 4.9600
Epoch 140/200
391/391 [==============================] - 23s 59ms/step - g_loss: 0.9114 - d_real_loss: 0.6293 - d_fake_loss: 0.5987 - d_acc: 0.5876 - kl_divergence: 4.9610
Epoch 141/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9196 - d_real_loss: 0.6289 - d_fake_loss: 0.5977 - d_acc: 0.5901 - kl_divergence: 4.9619
Epoch 142/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9201 - d_real_loss: 0.6259 - d_fake_loss: 0.5930 - d_acc: 0.5942 - kl_divergence: 4.9629
Epoch 143/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9203 - d_real_loss: 0.6267 - d_fake_loss: 0.5950 - d_acc: 0.5921 - kl_divergence: 4.9639
Epoch 144/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0015 - d_real_loss: 0.7647 - d_fake_loss: 0.8218 - d_acc: 0.5477 - kl_divergence: 4.9648
Epoch 145/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.7463 - d_real_loss: 0.6960 - d_fake_loss: 0.6891 - d_acc: 0.5033 - kl_divergence: 4.9647
Epoch 146/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8338 - d_real_loss: 0.6517 - d_fake_loss: 0.6284 - d_acc: 0.5553 - kl_divergence: 4.9648
Epoch 147/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8760 - d_real_loss: 0.6320 - d_fake_loss: 0.6058 - d_acc: 0.5835 - kl_divergence: 4.9656
Epoch 148/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8950 - d_real_loss: 0.6252 - d_fake_loss: 0.5975 - d_acc: 0.5922 - kl_divergence: 4.9663
Epoch 149/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9045 - d_real_loss: 0.6241 - d_fake_loss: 0.5933 - d_acc: 0.5941 - kl_divergence: 4.9670
Epoch 150/200
1/1 [==============================] - 1s 922ms/step g_loss: 0.9180 - d_real_loss: 0.6240 - d_fake_loss: 0.5919 - d_acc: 0.5950 - kl_divergence: 4.96
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 115ms/step
4/4 [==============================] - 0s 36ms/step
Epoch 150: Average (IS): 3.192033290863037 | Std (IS): 0.36593374609947205 | FID Score: 222.56291901186495
391/391 [==============================] - 39s 99ms/step - g_loss: 0.9177 - d_real_loss: 0.6239 - d_fake_loss: 0.5919 - d_acc: 0.5950 - kl_divergence: 4.9680
Epoch 151/200
391/391 [==============================] - 17s 43ms/step - g_loss: 0.9175 - d_real_loss: 0.6233 - d_fake_loss: 0.5916 - d_acc: 0.5970 - kl_divergence: 4.9688
Epoch 152/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9258 - d_real_loss: 0.6225 - d_fake_loss: 0.5901 - d_acc: 0.5981 - kl_divergence: 4.9696
Epoch 153/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9267 - d_real_loss: 0.6246 - d_fake_loss: 0.5909 - d_acc: 0.5980 - kl_divergence: 4.9704
Epoch 154/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9276 - d_real_loss: 0.6227 - d_fake_loss: 0.5886 - d_acc: 0.5988 - kl_divergence: 4.9713
Epoch 155/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9311 - d_real_loss: 0.6229 - d_fake_loss: 0.5896 - d_acc: 0.5996 - kl_divergence: 4.9722
Epoch 156/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.8564 - d_real_loss: 0.6908 - d_fake_loss: 0.6905 - d_acc: 0.5403 - kl_divergence: 4.9731
Epoch 157/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9107 - d_real_loss: 0.6223 - d_fake_loss: 0.5925 - d_acc: 0.5995 - kl_divergence: 4.9738
Epoch 158/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.9287 - d_real_loss: 0.6190 - d_fake_loss: 0.5873 - d_acc: 0.6048 - kl_divergence: 4.9744
Epoch 159/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.9376 - d_real_loss: 0.6183 - d_fake_loss: 0.5837 - d_acc: 0.6082 - kl_divergence: 4.9752
Epoch 160/200
391/391 [==============================] - 24s 60ms/step - g_loss: 0.9376 - d_real_loss: 0.6197 - d_fake_loss: 0.5844 - d_acc: 0.6039 - kl_divergence: 4.9762
Epoch 161/200
391/391 [==============================] - 18s 45ms/step - g_loss: 0.9433 - d_real_loss: 0.6195 - d_fake_loss: 0.5851 - d_acc: 0.6058 - kl_divergence: 4.9770
Epoch 162/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9455 - d_real_loss: 0.6202 - d_fake_loss: 0.5834 - d_acc: 0.6069 - kl_divergence: 4.9779
Epoch 163/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9520 - d_real_loss: 0.6173 - d_fake_loss: 0.5801 - d_acc: 0.6116 - kl_divergence: 4.9786
Epoch 164/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9516 - d_real_loss: 0.6188 - d_fake_loss: 0.5820 - d_acc: 0.6073 - kl_divergence: 4.9793
Epoch 165/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9549 - d_real_loss: 0.6183 - d_fake_loss: 0.5827 - d_acc: 0.6088 - kl_divergence: 4.9802
Epoch 166/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9568 - d_real_loss: 0.6155 - d_fake_loss: 0.5783 - d_acc: 0.6106 - kl_divergence: 4.9811
Epoch 167/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9602 - d_real_loss: 0.6150 - d_fake_loss: 0.5780 - d_acc: 0.6158 - kl_divergence: 4.9820
Epoch 168/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9595 - d_real_loss: 0.6201 - d_fake_loss: 0.5853 - d_acc: 0.6106 - kl_divergence: 4.9830
Epoch 169/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9690 - d_real_loss: 0.6122 - d_fake_loss: 0.5739 - d_acc: 0.6193 - kl_divergence: 4.9838
Epoch 170/200
391/391 [==============================] - 24s 62ms/step - g_loss: 0.9720 - d_real_loss: 0.6142 - d_fake_loss: 0.5758 - d_acc: 0.6183 - kl_divergence: 4.9847
Epoch 171/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9757 - d_real_loss: 0.6123 - d_fake_loss: 0.5727 - d_acc: 0.6205 - kl_divergence: 4.9857
Epoch 172/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9789 - d_real_loss: 0.6098 - d_fake_loss: 0.5710 - d_acc: 0.6234 - kl_divergence: 4.9867
Epoch 173/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9767 - d_real_loss: 0.6155 - d_fake_loss: 0.5781 - d_acc: 0.6167 - kl_divergence: 4.9876
Epoch 174/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9805 - d_real_loss: 0.6072 - d_fake_loss: 0.5670 - d_acc: 0.6279 - kl_divergence: 4.9886
Epoch 175/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9905 - d_real_loss: 0.6059 - d_fake_loss: 0.5656 - d_acc: 0.6279 - kl_divergence: 4.9895
Epoch 176/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9964 - d_real_loss: 0.6078 - d_fake_loss: 0.5654 - d_acc: 0.6265 - kl_divergence: 4.9905
Epoch 177/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9682 - d_real_loss: 0.6334 - d_fake_loss: 0.5994 - d_acc: 0.6059 - kl_divergence: 4.9915
Epoch 178/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9915 - d_real_loss: 0.6016 - d_fake_loss: 0.5628 - d_acc: 0.6323 - kl_divergence: 4.9923
Epoch 179/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0007 - d_real_loss: 0.6043 - d_fake_loss: 0.5614 - d_acc: 0.6300 - kl_divergence: 4.9931
Epoch 180/200
391/391 [==============================] - 23s 59ms/step - g_loss: 1.0045 - d_real_loss: 0.6024 - d_fake_loss: 0.5600 - d_acc: 0.6320 - kl_divergence: 4.9940
Epoch 181/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0085 - d_real_loss: 0.6012 - d_fake_loss: 0.5585 - d_acc: 0.6364 - kl_divergence: 4.9951
Epoch 182/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0089 - d_real_loss: 0.6045 - d_fake_loss: 0.5620 - d_acc: 0.6325 - kl_divergence: 4.9961
Epoch 183/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0157 - d_real_loss: 0.6019 - d_fake_loss: 0.5586 - d_acc: 0.6341 - kl_divergence: 4.9971
Epoch 184/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0058 - d_real_loss: 0.6333 - d_fake_loss: 0.5952 - d_acc: 0.6154 - kl_divergence: 4.9980
Epoch 185/200
391/391 [==============================] - 17s 44ms/step - g_loss: 0.9748 - d_real_loss: 0.6089 - d_fake_loss: 0.5718 - d_acc: 0.6258 - kl_divergence: 4.9987
Epoch 186/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0127 - d_real_loss: 0.5981 - d_fake_loss: 0.5552 - d_acc: 0.6384 - kl_divergence: 4.9994
Epoch 187/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0219 - d_real_loss: 0.5979 - d_fake_loss: 0.5567 - d_acc: 0.6380 - kl_divergence: 5.0003
Epoch 188/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0263 - d_real_loss: 0.5973 - d_fake_loss: 0.5519 - d_acc: 0.6418 - kl_divergence: 5.0014
Epoch 189/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0233 - d_real_loss: 0.5971 - d_fake_loss: 0.5538 - d_acc: 0.6423 - kl_divergence: 5.0024
Epoch 190/200
391/391 [==============================] - 23s 59ms/step - g_loss: 1.0287 - d_real_loss: 0.5974 - d_fake_loss: 0.5525 - d_acc: 0.6424 - kl_divergence: 5.0034
Epoch 191/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0327 - d_real_loss: 0.5974 - d_fake_loss: 0.5513 - d_acc: 0.6426 - kl_divergence: 5.0043
Epoch 192/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0413 - d_real_loss: 0.5968 - d_fake_loss: 0.5498 - d_acc: 0.6440 - kl_divergence: 5.0052
Epoch 193/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0405 - d_real_loss: 0.5946 - d_fake_loss: 0.5486 - d_acc: 0.6473 - kl_divergence: 5.0062
Epoch 194/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0237 - d_real_loss: 0.6067 - d_fake_loss: 0.5671 - d_acc: 0.6354 - kl_divergence: 5.0072
Epoch 195/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0412 - d_real_loss: 0.5927 - d_fake_loss: 0.5469 - d_acc: 0.6476 - kl_divergence: 5.0081
Epoch 196/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0436 - d_real_loss: 0.5928 - d_fake_loss: 0.5487 - d_acc: 0.6521 - kl_divergence: 5.0089
Epoch 197/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0444 - d_real_loss: 0.5956 - d_fake_loss: 0.5505 - d_acc: 0.6484 - kl_divergence: 5.0098
Epoch 198/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0532 - d_real_loss: 0.5936 - d_fake_loss: 0.5459 - d_acc: 0.6493 - kl_divergence: 5.0108
Epoch 199/200
391/391 [==============================] - 17s 44ms/step - g_loss: 1.0537 - d_real_loss: 0.5892 - d_fake_loss: 0.5426 - d_acc: 0.6551 - kl_divergence: 5.0117
Epoch 200/200
1/1 [==============================] - 1s 937ms/step g_loss: 1.0187 - d_real_loss: 0.6221 - d_fake_loss: 0.5827 - d_acc: 0.6226 - kl_divergence: 5.01
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 104ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 200: Average (IS): 3.2198996543884277 | Std (IS): 0.4002697765827179 | FID Score: 216.87899884034704
391/391 [==============================] - 39s 100ms/step - g_loss: 1.0187 - d_real_loss: 0.6221 - d_fake_loss: 0.5827 - d_acc: 0.6226 - kl_divergence: 5.0125

DISPLAYING BEST FID AND INCEPTION SCORES FOR DCGAN

  • We can see that the FID and Inception Scores are not too good for DCGAN, with a high FID of 216. Hence we will work on improving this in our next few models, such as the cDCGAN model and SNGAN model.
In [76]:
monitor = callbacks[0]

# Extract the best KL Divergence
best_kl_div = min(history.history['kl_divergence'])

# Extract the best FID Score
best_fid = min(monitor.fid_scores) if monitor.fid_scores else None

# Extract the best IS Score (average)
best_is_avg = max(is_avg for is_avg, _ in monitor.is_scores) if monitor.is_scores else None

# Create a DataFrame to store these best values
dcgan_df = pd.DataFrame({
    'Best KL Divergence': [best_kl_div],
    'Best FID': [best_fid],
    'Best IS': [best_is_avg]
})

# Display the DataFrame
dcgan_df
Out[76]:
Best KL Divergence Best FID Best IS
0 4.918345 216.878999 3.357959

PLOTTING THE MODEL'S PERFORMANCE OVER TIME

  • From the KL Divergence, we see that it starts very high at first and quickly drops, indicating that the generator starts with outputs that are very different from the expected distribution, but rapidly improves.
  • For the discriminator accuracy, we see that the accuracy starts around 60%, spikes, then trends upwards. This is not too ideal, as it suggests the discriminator is getting better at distinguishing real from fake, which could mean the generator is not improving at the same rate, or the discriminator is too powerful.
  • For the losses, the generator loss starts low and increases upwards, suggesting that the generator is struggling to fool the discriminator as training progresses. In contrast, for the discriminator losses, both lines start high (which is good), drop (which might indicate the generator is getting better), but then they tend to flatten out, suggesting that the discriminator is not being challenged enough by the generator.
In [77]:
plot_model_performance(history)
No description has been provided for this image

LOADING AND TESTING THE GENERATOR WEIGHTS ON SYNTHETIC IMAGES

In [78]:
# Loading and testing the generator's weights
generator.load_weights('modelweights/dcgan/epoch_200/generator_weights_epoch_200.h5')
generator.summary()
Model: "DCGAN_Generator"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_6 (Dense)             (None, 4096)              413696    
                                                                 
 leaky_re_lu_24 (LeakyReLU)  (None, 4096)              0         
                                                                 
 reshape_3 (Reshape)         (None, 4, 4, 256)         0         
                                                                 
 conv2d_transpose_9 (Conv2DT  (None, 8, 8, 128)        524416    
 ranspose)                                                       
                                                                 
 leaky_re_lu_25 (LeakyReLU)  (None, 8, 8, 128)         0         
                                                                 
 conv2d_transpose_10 (Conv2D  (None, 16, 16, 128)      262272    
 Transpose)                                                      
                                                                 
 leaky_re_lu_26 (LeakyReLU)  (None, 16, 16, 128)       0         
                                                                 
 conv2d_transpose_11 (Conv2D  (None, 32, 32, 128)      262272    
 Transpose)                                                      
                                                                 
 leaky_re_lu_27 (LeakyReLU)  (None, 32, 32, 128)       0         
                                                                 
 conv2d_1895 (Conv2D)        (None, 32, 32, 3)         3459      
                                                                 
=================================================================
Total params: 1,466,115
Trainable params: 1,466,115
Non-trainable params: 0
_________________________________________________________________
In [79]:
# Generate random latent vectors
latent_vectors = tf.random.normal(shape=(100, LATENT_DIM))

# Generate images using the loaded generator
generated_images = generator(latent_vectors, training=False)
generated_images = (generated_images + 1) / 2

# Create a grid of subplots to display generated images
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_images[i], cmap='gray')
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

MODEL 2 : CONDITIONAL DCGAN WITH GRADIENT TAPE¶

Conditional DCGAN with Gradient Tape

Now, we will be exploring Conditional DCGAN, which extends the DCGAN model by incorporating additional information (conditions), such as class labels or data from other modalities, to direct the generation process. This allows the model to generate data that is conditioned on certain attributes, making the model capable of generating more specific or diverse outputs.

In addition, this model also similarly applies GRADIENT TAPE, same as DCGAN, which is essentially used for automatic differentiation. Gradient Tape records operations for automatic differentiation; that is, it keeps track of operations performed during forward pass so that during the backward pass (backpropagation), gradients can be computed. By utilizing Gradient Tape with a conditional DCGAN, it provides a more fine-grained control over the training process, as it allows for custom and complex gradient computations.

BUILDING THE CDCGAN GENERATOR FUNCTION

In [47]:
def create_generator(latent_dim):
    # foundation for label embeedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # linear activation
    label_embedding = Dense(4 * 4, name='Label_Dense')(label_embedding)

    # reshape to additional channel
    label_embedding = Reshape((4, 4, 1), name='Label_Reshape')(label_embedding)

    # foundation for 4x4 image input
    noise_input = Input(shape=(latent_dim,), name='Noise_Input')
    noise_dense = Dense(4 * 4 * 128, name='Noise_Dense')(noise_input)
    noise_dense = ReLU(name='Noise_ReLU')(noise_dense)
    noise_reshape = Reshape((4, 4, 128), name='Noise_Reshape')(noise_dense)

    # concatenate label embedding and image to produce 129-channel output
    concat = Concatenate(name='Concatenate')([noise_reshape, label_embedding])

    # upsample to 8x8
    conv1 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv1')(concat)
    conv1 = ReLU(name='Conv1_ReLU')(conv1)

    # upsample to 16x16
    conv2 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv2')(conv1)
    conv2 = ReLU(name='Conv2_ReLU')(conv2)

    # upsample to 32x32
    conv3 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv3')(conv2)
    conv3 = ReLU(name='Conv3_ReLU')(conv3)

    # output 32x32x3
    output = Conv2D(3, (3, 3), activation='tanh', padding='same', name='Output')(conv3)
    model = Model(inputs=[noise_input, label_input], outputs=output, name='cDCGAN_Generator')

    return model
In [48]:
create_generator(latent_dim=128).summary()
Model: "cDCGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________

BUILDING THE cDCGAN DISCRIMINATOR FUNCTION

In [49]:
def create_discriminator():
    # Foundation for label embedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # Linear activation
    label_embedding = Dense(32 * 32, name='Label_Dense')(label_embedding)
    
    # Reshape to additional channel
    label_embedding = Reshape((32, 32, 1), name='Label_Reshape')(label_embedding)

    # Foundation for 32x32 image input
    image_input = Input(shape=(32, 32, 3), name='Image_Input')

    # Concatenate label embedding and image to produce 129-channel input
    concat = Concatenate(name='Concatenate')([image_input, label_embedding])

    # Downsample to 16x16
    conv1 = Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv1')(concat)
    conv1 = LeakyReLU(alpha=0.2, name='Conv1_Leaky_Relu')(conv1)
    
    # Downsample to 8x8
    conv2 = Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv2')(conv1)
    conv2 = LeakyReLU(alpha=0.2, name='Conv2_Leaky_Relu')(conv2)
    
    # Downsample to 4x4
    conv3 = Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv3')(conv2)
    conv3 = LeakyReLU(alpha=0.2, name='Conv3_Leaky_Relu')(conv3)

    # Flatten feature maps
    flat = Flatten(name='Flatten')(conv3)
    output = Dense(units=1, activation='sigmoid', name='Output')(flat)
    model = Model(inputs=[image_input, label_input], outputs=output, name='cDCGAN_Discriminator')

    return model
In [50]:
create_discriminator().summary()
Model: "cDCGAN_Discriminator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 1024)      11264       ['Label_Embedding[0][0]']        
                                                                                                  
 Image_Input (InputLayer)       [(None, 32, 32, 3)]  0           []                               
                                                                                                  
 Label_Reshape (Reshape)        (None, 32, 32, 1)    0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 32, 32, 4)    0           ['Image_Input[0][0]',            
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2D)                 (None, 16, 16, 128)  4736        ['Concatenate[0][0]']            
                                                                                                  
 Conv1_Leaky_Relu (LeakyReLU)   (None, 16, 16, 128)  0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2D)                 (None, 8, 8, 128)    147584      ['Conv1_Leaky_Relu[0][0]']       
                                                                                                  
 Conv2_Leaky_Relu (LeakyReLU)   (None, 8, 8, 128)    0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2D)                 (None, 4, 4, 128)    147584      ['Conv2_Leaky_Relu[0][0]']       
                                                                                                  
 Conv3_Leaky_Relu (LeakyReLU)   (None, 4, 4, 128)    0           ['Conv3[0][0]']                  
                                                                                                  
 Flatten (Flatten)              (None, 2048)         0           ['Conv3_Leaky_Relu[0][0]']       
                                                                                                  
 Output (Dense)                 (None, 1)            2049        ['Flatten[0][0]']                
                                                                                                  
==================================================================================================
Total params: 313,317
Trainable params: 313,317
Non-trainable params: 0
__________________________________________________________________________________________________

BUILDING THE TRAINING FUNCTIONS AND CLASSES FOR cDCGAN

In [51]:
class ConditionalDCGAN(Model):
    def __init__(self, generator, discriminator, latent_dim):
        super(ConditionalDCGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(ConditionalDCGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_real_loss_metric = keras.metrics.Mean(name='d_real_loss')
        self.d_fake_loss_metric = keras.metrics.Mean(name='d_fake_loss')
        self.d_acc_metric = keras.metrics.BinaryAccuracy(name='d_acc')
        self.kl_metric = keras.metrics.KLDivergence()

    @property
    def metrics(self):
        return [self.g_loss_metric, self.d_real_loss_metric, self.d_fake_loss_metric, self.d_acc_metric, self.kl_metric]

    def train_step(self, data):
        real_images, class_labels = data
        class_labels = tf.cast(class_labels, 'int32')
        batch_size = tf.shape(real_images)[0]

        # train discriminator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        fake_labels = tf.zeros((batch_size, 1))  # (batch_size, 1)
        real_labels = tf.ones((batch_size, 1))  # (batch_size, 1)

        # freeze generator
        self.discriminator.trainable = True
        self.generator.trainable = False
    
        with tf.GradientTape() as disc_tape:
            disc_tape.watch(self.discriminator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, class_labels], training=True)
            real_output = self.discriminator([real_images, class_labels], training=True)
            fake_output = self.discriminator([generated_images, class_labels], training=True)
            d_loss_real = self.loss_fn(real_labels, real_output)
            d_loss_fake = self.loss_fn(fake_labels, fake_output)
            d_loss = d_loss_real + d_loss_fake  # log(D(x)) + log(1 - D(G(z))
        
        disc_grads = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(disc_grads, self.discriminator.trainable_variables))

        # train the generator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        misleading_labels = tf.ones((batch_size, 1))

        # freeze discriminator
        self.discriminator.trainable = False
        self.generator.trainable = True

        with tf.GradientTape() as gen_tape:
            gen_tape.watch(self.generator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, class_labels], training=True)
            pred_on_fake = self.discriminator([generated_images, class_labels], training=True)
            g_loss = self.loss_fn(misleading_labels, pred_on_fake)  # maximize log(D(G(z))) = minimize -log(1 - D(G(z)))
        
        gen_grads = gen_tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_optimizer.apply_gradients(zip(gen_grads, self.generator.trainable_variables))

        # update metrics
        self.g_loss_metric.update_state(g_loss)
        self.d_real_loss_metric.update_state(d_loss_real)
        self.d_fake_loss_metric.update_state(d_loss_fake)
        self.d_acc_metric.update_state(real_labels, real_output)
        self.kl_metric.update_state(y_true=real_images, y_pred=generated_images)

        return {
            'g_loss': self.g_loss_metric.result(),
            'd_real_loss': self.d_real_loss_metric.result(),
            'd_fake_loss': self.d_fake_loss_metric.result(),
            'd_acc': self.d_acc_metric.result(),
            'kl_divergence': self.kl_metric.result()
        }
In [55]:
class GANMonitor(Callback):
    def __init__(self, latent_dim, class_labels):
        self.latent_dim = latent_dim
        self.class_labels = class_labels
        self.cdcgan_fid_scores = []
        self.cdcgan_is_scores = []

    def on_epoch_end(self, epoch, logs=None):
        # Plot 100 generated images and save weights every 10 epochs
        latent_vectors = tf.random.normal(shape=(100, self.latent_dim))
        class_labels = tf.reshape(tf.range(10), shape=(10, 1))
        class_labels = tf.tile(class_labels, multiples=(1, 10))
        class_labels = tf.reshape(class_labels, shape=(100, 1))

        generated_images = self.model.generator([latent_vectors, class_labels], training=False)
        generated_images = (generated_images + 1) / 2

        if not os.path.exists('modelweights/cdcgan'):
            os.makedirs('modelweights/cdcgan')

        if not os.path.exists('images/cdcgan_images'):
            os.makedirs('images/cdcgan_images')
            
        if (epoch + 1) % 50 == 0:
            # Calculate FID and IS
            is_avg, is_std = calculate_inception_score(generated_images)
            fid = calculate_fid(generated_images)
            
            # Append metrics to lists
            self.cdcgan_fid_scores.append(fid)
            self.cdcgan_is_scores.append((is_avg, is_std))
            
            print(f'Epoch {epoch + 1}: Average (IS): {is_avg} | Std (IS): {is_std} | FID Score: {fid}')

        if (epoch + 1) % 10 == 0:
            if not os.path.exists(f'modelweights/cdcgan/epoch_{epoch + 1}'):
                os.makedirs(f'modelweights/cdcgan/epoch_{epoch + 1}')
                self.model.generator.save_weights(f'modelweights/cdcgan/epoch_{epoch + 1}/generator_weights_epoch_{epoch + 1}.h5')
                self.model.discriminator.save_weights(f'modelweights/cdcgan/epoch_{epoch + 1}/discriminator_weights_epoch_{epoch + 1}.h5')
                print(f'\nSaving Model Weights At Epoch {epoch + 1}.\n')

            fig, axes = plt.subplots(10, 10, figsize=(20, 20))
            axes = axes.flatten()

            for i, ax in enumerate(axes):
                ax.imshow(generated_images[i])
                ax.set_title(self.class_labels[class_labels[i].numpy().item()], fontsize=16)
                ax.axis('off')

            plt.tight_layout()
            plt.savefig(f'images/cdcgan_images/generated_img_{epoch + 1}.png')
            plt.close()
In [56]:
# Defining Constants for the Model
EPOCHS = 200
LATENT_DIM = 128    
LEARNING_RATE = 2e-4
BETA_1 = 0.5
LABEL_SMOOTHING = 0.1

# Defining callbacks for the Model
callbacks = [GANMonitor(LATENT_DIM, class_labels)]

generator = create_generator(LATENT_DIM)
discriminator = create_discriminator()
cdcgan = ConditionalDCGAN(generator, discriminator, latent_dim=LATENT_DIM)
cdcgan.compile(
    g_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    d_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    loss_fn=BinaryCrossentropy(label_smoothing=LABEL_SMOOTHING)
)
In [57]:
history = cdcgan.fit(dataset, epochs=EPOCHS, callbacks=callbacks, use_multiprocessing=True)
Epoch 1/200
391/391 [==============================] - 13s 27ms/step - g_loss: 1.6162 - d_real_loss: 0.4874 - d_fake_loss: 0.4964 - d_acc: 0.8237 - kl_divergence: 5.4479
Epoch 2/200
391/391 [==============================] - 11s 27ms/step - g_loss: 1.6262 - d_real_loss: 0.5470 - d_fake_loss: 0.4331 - d_acc: 0.7348 - kl_divergence: 5.4537
Epoch 3/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.4885 - d_real_loss: 0.5519 - d_fake_loss: 0.4611 - d_acc: 0.7304 - kl_divergence: 4.6261
Epoch 4/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0319 - d_real_loss: 0.6570 - d_fake_loss: 0.5811 - d_acc: 0.5942 - kl_divergence: 5.0323
Epoch 5/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0759 - d_real_loss: 0.6324 - d_fake_loss: 0.5847 - d_acc: 0.6427 - kl_divergence: 5.0721
Epoch 6/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0250 - d_real_loss: 0.6538 - d_fake_loss: 0.6073 - d_acc: 0.6177 - kl_divergence: 5.1832
Epoch 7/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9954 - d_real_loss: 0.6624 - d_fake_loss: 0.6091 - d_acc: 0.5739 - kl_divergence: 5.0948
Epoch 8/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0284 - d_real_loss: 0.6465 - d_fake_loss: 0.5975 - d_acc: 0.5980 - kl_divergence: 5.0306
Epoch 9/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.1533 - d_real_loss: 0.6146 - d_fake_loss: 0.5585 - d_acc: 0.6591 - kl_divergence: 4.8708
Epoch 10/200
391/391 [==============================] - 19s 49ms/step - g_loss: 1.1453 - d_real_loss: 0.6090 - d_fake_loss: 0.5475 - d_acc: 0.6521 - kl_divergence: 4.8063
Epoch 11/200
391/391 [==============================] - 11s 26ms/step - g_loss: 1.1179 - d_real_loss: 0.6273 - d_fake_loss: 0.5691 - d_acc: 0.6330 - kl_divergence: 4.8166
Epoch 12/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.1613 - d_real_loss: 0.6152 - d_fake_loss: 0.5513 - d_acc: 0.6608 - kl_divergence: 4.7890
Epoch 13/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.2007 - d_real_loss: 0.5929 - d_fake_loss: 0.5382 - d_acc: 0.6771 - kl_divergence: 4.5959
Epoch 14/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.1616 - d_real_loss: 0.6117 - d_fake_loss: 0.5530 - d_acc: 0.6534 - kl_divergence: 4.7199
Epoch 15/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.1240 - d_real_loss: 0.6111 - d_fake_loss: 0.5667 - d_acc: 0.6556 - kl_divergence: 4.6508
Epoch 16/200
391/391 [==============================] - 10s 27ms/step - g_loss: 1.1041 - d_real_loss: 0.6244 - d_fake_loss: 0.5724 - d_acc: 0.6370 - kl_divergence: 4.6060
Epoch 17/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0318 - d_real_loss: 0.6291 - d_fake_loss: 0.5946 - d_acc: 0.6320 - kl_divergence: 4.7010
Epoch 18/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0121 - d_real_loss: 0.6274 - d_fake_loss: 0.5933 - d_acc: 0.6390 - kl_divergence: 4.6193
Epoch 19/200
391/391 [==============================] - 10s 27ms/step - g_loss: 1.0447 - d_real_loss: 0.6172 - d_fake_loss: 0.5908 - d_acc: 0.6623 - kl_divergence: 4.6717
Epoch 20/200
391/391 [==============================] - 18s 47ms/step - g_loss: 1.0264 - d_real_loss: 0.6315 - d_fake_loss: 0.6061 - d_acc: 0.6414 - kl_divergence: 4.6296
Epoch 21/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9710 - d_real_loss: 0.6388 - d_fake_loss: 0.6158 - d_acc: 0.6259 - kl_divergence: 4.6359
Epoch 22/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9542 - d_real_loss: 0.6409 - d_fake_loss: 0.6203 - d_acc: 0.6172 - kl_divergence: 4.7517
Epoch 23/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9353 - d_real_loss: 0.6442 - d_fake_loss: 0.6242 - d_acc: 0.6152 - kl_divergence: 4.6477
Epoch 24/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9185 - d_real_loss: 0.6470 - d_fake_loss: 0.6271 - d_acc: 0.6073 - kl_divergence: 4.7083
Epoch 25/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9150 - d_real_loss: 0.6505 - d_fake_loss: 0.6325 - d_acc: 0.6035 - kl_divergence: 4.6575
Epoch 26/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9162 - d_real_loss: 0.6529 - d_fake_loss: 0.6364 - d_acc: 0.5971 - kl_divergence: 4.6352
Epoch 27/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9131 - d_real_loss: 0.6473 - d_fake_loss: 0.6313 - d_acc: 0.6077 - kl_divergence: 4.6250
Epoch 28/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9137 - d_real_loss: 0.6481 - d_fake_loss: 0.6301 - d_acc: 0.6071 - kl_divergence: 4.6408
Epoch 29/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8848 - d_real_loss: 0.6504 - d_fake_loss: 0.6348 - d_acc: 0.6025 - kl_divergence: 4.6456
Epoch 30/200
391/391 [==============================] - 18s 47ms/step - g_loss: 0.8891 - d_real_loss: 0.6556 - d_fake_loss: 0.6381 - d_acc: 0.5942 - kl_divergence: 4.6478
Epoch 31/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8776 - d_real_loss: 0.6534 - d_fake_loss: 0.6385 - d_acc: 0.6010 - kl_divergence: 4.6833
Epoch 32/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8893 - d_real_loss: 0.6549 - d_fake_loss: 0.6376 - d_acc: 0.5930 - kl_divergence: 4.6742
Epoch 33/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8824 - d_real_loss: 0.6547 - d_fake_loss: 0.6387 - d_acc: 0.5951 - kl_divergence: 4.6526
Epoch 34/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8755 - d_real_loss: 0.6542 - d_fake_loss: 0.6407 - d_acc: 0.5926 - kl_divergence: 4.6578
Epoch 35/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8757 - d_real_loss: 0.6564 - d_fake_loss: 0.6417 - d_acc: 0.5924 - kl_divergence: 4.6259
Epoch 36/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.8624 - d_real_loss: 0.6566 - d_fake_loss: 0.6440 - d_acc: 0.5896 - kl_divergence: 4.6087
Epoch 37/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.8647 - d_real_loss: 0.6583 - d_fake_loss: 0.6429 - d_acc: 0.5860 - kl_divergence: 4.6402
Epoch 38/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8520 - d_real_loss: 0.6584 - d_fake_loss: 0.6450 - d_acc: 0.5877 - kl_divergence: 4.6094
Epoch 39/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8556 - d_real_loss: 0.6610 - d_fake_loss: 0.6483 - d_acc: 0.5804 - kl_divergence: 4.6561
Epoch 40/200
391/391 [==============================] - 19s 48ms/step - g_loss: 0.8411 - d_real_loss: 0.6600 - d_fake_loss: 0.6474 - d_acc: 0.5827 - kl_divergence: 4.6522
Epoch 41/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.8419 - d_real_loss: 0.6632 - d_fake_loss: 0.6507 - d_acc: 0.5728 - kl_divergence: 4.6019
Epoch 42/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8455 - d_real_loss: 0.6641 - d_fake_loss: 0.6533 - d_acc: 0.5728 - kl_divergence: 4.6200
Epoch 43/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8394 - d_real_loss: 0.6632 - d_fake_loss: 0.6504 - d_acc: 0.5742 - kl_divergence: 4.5772
Epoch 44/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8358 - d_real_loss: 0.6647 - d_fake_loss: 0.6524 - d_acc: 0.5694 - kl_divergence: 4.6310
Epoch 45/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8220 - d_real_loss: 0.6620 - d_fake_loss: 0.6508 - d_acc: 0.5733 - kl_divergence: 4.6655
Epoch 46/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8297 - d_real_loss: 0.6637 - d_fake_loss: 0.6527 - d_acc: 0.5707 - kl_divergence: 4.6035
Epoch 47/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8316 - d_real_loss: 0.6645 - d_fake_loss: 0.6540 - d_acc: 0.5707 - kl_divergence: 4.6540
Epoch 48/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8218 - d_real_loss: 0.6645 - d_fake_loss: 0.6534 - d_acc: 0.5719 - kl_divergence: 4.6241
Epoch 49/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.8243 - d_real_loss: 0.6666 - d_fake_loss: 0.6547 - d_acc: 0.5655 - kl_divergence: 4.6159
Epoch 50/200
1/1 [==============================] - 1s 891ms/step g_loss: 0.8258 - d_real_loss: 0.6650 - d_fake_loss: 0.6531 - d_acc: 0.5675 - kl_divergence: 4.58
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
4/4 [==============================] - 1s 115ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 50: Average (IS): 2.1257643699645996 | Std (IS): 0.1607736498117447 | FID Score: 235.64452794875638

Saving Model Weights At Epoch 50.

391/391 [==============================] - 35s 88ms/step - g_loss: 0.8260 - d_real_loss: 0.6649 - d_fake_loss: 0.6532 - d_acc: 0.5679 - kl_divergence: 4.5893
Epoch 51/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.8245 - d_real_loss: 0.6635 - d_fake_loss: 0.6539 - d_acc: 0.5706 - kl_divergence: 4.6152
Epoch 52/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8163 - d_real_loss: 0.6631 - d_fake_loss: 0.6537 - d_acc: 0.5708 - kl_divergence: 4.6143
Epoch 53/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8325 - d_real_loss: 0.6640 - d_fake_loss: 0.6522 - d_acc: 0.5693 - kl_divergence: 4.6303
Epoch 54/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8261 - d_real_loss: 0.6619 - d_fake_loss: 0.6528 - d_acc: 0.5738 - kl_divergence: 4.6413
Epoch 55/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8248 - d_real_loss: 0.6637 - d_fake_loss: 0.6526 - d_acc: 0.5683 - kl_divergence: 4.5909
Epoch 56/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8247 - d_real_loss: 0.6630 - d_fake_loss: 0.6527 - d_acc: 0.5691 - kl_divergence: 4.6592
Epoch 57/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8214 - d_real_loss: 0.6621 - d_fake_loss: 0.6509 - d_acc: 0.5755 - kl_divergence: 4.5980
Epoch 58/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8281 - d_real_loss: 0.6593 - d_fake_loss: 0.6505 - d_acc: 0.5782 - kl_divergence: 4.6361
Epoch 59/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8269 - d_real_loss: 0.6579 - d_fake_loss: 0.6465 - d_acc: 0.5811 - kl_divergence: 4.6362
Epoch 60/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.8361 - d_real_loss: 0.6591 - d_fake_loss: 0.6492 - d_acc: 0.5770 - kl_divergence: 4.6084
Saving Model Weights At Epoch 60.

391/391 [==============================] - 18s 46ms/step - g_loss: 0.8361 - d_real_loss: 0.6593 - d_fake_loss: 0.6492 - d_acc: 0.5768 - kl_divergence: 4.6083
Epoch 61/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8324 - d_real_loss: 0.6581 - d_fake_loss: 0.6473 - d_acc: 0.5780 - kl_divergence: 4.5870
Epoch 62/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8342 - d_real_loss: 0.6547 - d_fake_loss: 0.6439 - d_acc: 0.5838 - kl_divergence: 4.5763
Epoch 63/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8360 - d_real_loss: 0.6540 - d_fake_loss: 0.6431 - d_acc: 0.5901 - kl_divergence: 4.6129
Epoch 64/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8419 - d_real_loss: 0.6542 - d_fake_loss: 0.6429 - d_acc: 0.5869 - kl_divergence: 4.6061
Epoch 65/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8420 - d_real_loss: 0.6542 - d_fake_loss: 0.6428 - d_acc: 0.5877 - kl_divergence: 4.6082
Epoch 66/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8450 - d_real_loss: 0.6521 - d_fake_loss: 0.6401 - d_acc: 0.5909 - kl_divergence: 4.6024
Epoch 67/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8455 - d_real_loss: 0.6507 - d_fake_loss: 0.6383 - d_acc: 0.5909 - kl_divergence: 4.5948
Epoch 68/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8550 - d_real_loss: 0.6523 - d_fake_loss: 0.6391 - d_acc: 0.5882 - kl_divergence: 4.6367
Epoch 69/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.8545 - d_real_loss: 0.6496 - d_fake_loss: 0.6360 - d_acc: 0.5958 - kl_divergence: 4.5668
Epoch 70/200
391/391 [==============================] - ETA: 0s - g_loss: 0.8525 - d_real_loss: 0.6471 - d_fake_loss: 0.6346 - d_acc: 0.5956 - kl_divergence: 4.6050
Saving Model Weights At Epoch 70.

391/391 [==============================] - 19s 49ms/step - g_loss: 0.8525 - d_real_loss: 0.6471 - d_fake_loss: 0.6346 - d_acc: 0.5956 - kl_divergence: 4.6051
Epoch 71/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.8550 - d_real_loss: 0.6471 - d_fake_loss: 0.6334 - d_acc: 0.5996 - kl_divergence: 4.5786
Epoch 72/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8535 - d_real_loss: 0.6462 - d_fake_loss: 0.6327 - d_acc: 0.5995 - kl_divergence: 4.6253
Epoch 73/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8600 - d_real_loss: 0.6467 - d_fake_loss: 0.6325 - d_acc: 0.5968 - kl_divergence: 4.6229
Epoch 74/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8578 - d_real_loss: 0.6466 - d_fake_loss: 0.6332 - d_acc: 0.5964 - kl_divergence: 4.6062
Epoch 75/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8613 - d_real_loss: 0.6453 - d_fake_loss: 0.6318 - d_acc: 0.5987 - kl_divergence: 4.5913
Epoch 76/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8593 - d_real_loss: 0.6442 - d_fake_loss: 0.6288 - d_acc: 0.6027 - kl_divergence: 4.5821
Epoch 77/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8585 - d_real_loss: 0.6426 - d_fake_loss: 0.6286 - d_acc: 0.6043 - kl_divergence: 4.5951
Epoch 78/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8650 - d_real_loss: 0.6433 - d_fake_loss: 0.6273 - d_acc: 0.6019 - kl_divergence: 4.6018
Epoch 79/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8706 - d_real_loss: 0.6429 - d_fake_loss: 0.6274 - d_acc: 0.6008 - kl_divergence: 4.6233
Epoch 80/200
389/391 [============================>.] - ETA: 0s - g_loss: 0.8712 - d_real_loss: 0.6424 - d_fake_loss: 0.6268 - d_acc: 0.6010 - kl_divergence: 4.6137
Saving Model Weights At Epoch 80.

391/391 [==============================] - 18s 46ms/step - g_loss: 0.8711 - d_real_loss: 0.6422 - d_fake_loss: 0.6268 - d_acc: 0.6011 - kl_divergence: 4.6136
Epoch 81/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.8686 - d_real_loss: 0.6407 - d_fake_loss: 0.6247 - d_acc: 0.6069 - kl_divergence: 4.6280
Epoch 82/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8691 - d_real_loss: 0.6390 - d_fake_loss: 0.6229 - d_acc: 0.6087 - kl_divergence: 4.6054
Epoch 83/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8764 - d_real_loss: 0.6389 - d_fake_loss: 0.6225 - d_acc: 0.6067 - kl_divergence: 4.6370
Epoch 84/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8825 - d_real_loss: 0.6405 - d_fake_loss: 0.6244 - d_acc: 0.6059 - kl_divergence: 4.5767
Epoch 85/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8721 - d_real_loss: 0.6366 - d_fake_loss: 0.6200 - d_acc: 0.6094 - kl_divergence: 4.5841
Epoch 86/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8828 - d_real_loss: 0.6381 - d_fake_loss: 0.6206 - d_acc: 0.6093 - kl_divergence: 4.5601
Epoch 87/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8881 - d_real_loss: 0.6369 - d_fake_loss: 0.6195 - d_acc: 0.6080 - kl_divergence: 4.5893
Epoch 88/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.8849 - d_real_loss: 0.6352 - d_fake_loss: 0.6178 - d_acc: 0.6130 - kl_divergence: 4.6156
Epoch 89/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8843 - d_real_loss: 0.6359 - d_fake_loss: 0.6178 - d_acc: 0.6113 - kl_divergence: 4.6222
Epoch 90/200
389/391 [============================>.] - ETA: 0s - g_loss: 0.8885 - d_real_loss: 0.6350 - d_fake_loss: 0.6162 - d_acc: 0.6102 - kl_divergence: 4.6372
Saving Model Weights At Epoch 90.

391/391 [==============================] - 18s 47ms/step - g_loss: 0.8887 - d_real_loss: 0.6351 - d_fake_loss: 0.6164 - d_acc: 0.6099 - kl_divergence: 4.6368
Epoch 91/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.8922 - d_real_loss: 0.6341 - d_fake_loss: 0.6161 - d_acc: 0.6118 - kl_divergence: 4.5882
Epoch 92/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8891 - d_real_loss: 0.6328 - d_fake_loss: 0.6137 - d_acc: 0.6143 - kl_divergence: 4.6462
Epoch 93/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8970 - d_real_loss: 0.6330 - d_fake_loss: 0.6131 - d_acc: 0.6140 - kl_divergence: 4.6267
Epoch 94/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8963 - d_real_loss: 0.6321 - d_fake_loss: 0.6118 - d_acc: 0.6150 - kl_divergence: 4.5853
Epoch 95/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8997 - d_real_loss: 0.6331 - d_fake_loss: 0.6129 - d_acc: 0.6122 - kl_divergence: 4.6607
Epoch 96/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.8974 - d_real_loss: 0.6314 - d_fake_loss: 0.6118 - d_acc: 0.6114 - kl_divergence: 4.6130
Epoch 97/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9032 - d_real_loss: 0.6312 - d_fake_loss: 0.6108 - d_acc: 0.6158 - kl_divergence: 4.5874
Epoch 98/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9057 - d_real_loss: 0.6305 - d_fake_loss: 0.6099 - d_acc: 0.6147 - kl_divergence: 4.6450
Epoch 99/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9134 - d_real_loss: 0.6307 - d_fake_loss: 0.6095 - d_acc: 0.6180 - kl_divergence: 4.6262
Epoch 100/200
1/1 [==============================] - 1s 906ms/step g_loss: 0.9088 - d_real_loss: 0.6285 - d_fake_loss: 0.6076 - d_acc: 0.6212 - kl_divergence: 4.64
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 99ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 100: Average (IS): 2.5793957710266113 | Std (IS): 0.3097103536128998 | FID Score: 215.85946594003704

Saving Model Weights At Epoch 100.

391/391 [==============================] - 35s 89ms/step - g_loss: 0.9089 - d_real_loss: 0.6283 - d_fake_loss: 0.6078 - d_acc: 0.6214 - kl_divergence: 4.6416
Epoch 101/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9074 - d_real_loss: 0.6279 - d_fake_loss: 0.6074 - d_acc: 0.6212 - kl_divergence: 4.5973
Epoch 102/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9081 - d_real_loss: 0.6264 - d_fake_loss: 0.6055 - d_acc: 0.6218 - kl_divergence: 4.6276
Epoch 103/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9152 - d_real_loss: 0.6303 - d_fake_loss: 0.6086 - d_acc: 0.6194 - kl_divergence: 4.6298
Epoch 104/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9159 - d_real_loss: 0.6276 - d_fake_loss: 0.6067 - d_acc: 0.6224 - kl_divergence: 4.6003
Epoch 105/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9181 - d_real_loss: 0.6264 - d_fake_loss: 0.6053 - d_acc: 0.6244 - kl_divergence: 4.6001
Epoch 106/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9185 - d_real_loss: 0.6260 - d_fake_loss: 0.6041 - d_acc: 0.6228 - kl_divergence: 4.6007
Epoch 107/200
391/391 [==============================] - 11s 28ms/step - g_loss: 0.9279 - d_real_loss: 0.6284 - d_fake_loss: 0.6069 - d_acc: 0.6220 - kl_divergence: 4.6262
Epoch 108/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9202 - d_real_loss: 0.6247 - d_fake_loss: 0.6027 - d_acc: 0.6245 - kl_divergence: 4.5830
Epoch 109/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9238 - d_real_loss: 0.6259 - d_fake_loss: 0.6047 - d_acc: 0.6233 - kl_divergence: 4.5994
Epoch 110/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.9175 - d_real_loss: 0.6257 - d_fake_loss: 0.6022 - d_acc: 0.6222 - kl_divergence: 4.6191
Saving Model Weights At Epoch 110.

391/391 [==============================] - 18s 47ms/step - g_loss: 0.9173 - d_real_loss: 0.6256 - d_fake_loss: 0.6021 - d_acc: 0.6224 - kl_divergence: 4.6190
Epoch 111/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.9219 - d_real_loss: 0.6236 - d_fake_loss: 0.6011 - d_acc: 0.6253 - kl_divergence: 4.5902
Epoch 112/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9226 - d_real_loss: 0.6249 - d_fake_loss: 0.6018 - d_acc: 0.6240 - kl_divergence: 4.5998
Epoch 113/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9291 - d_real_loss: 0.6254 - d_fake_loss: 0.6016 - d_acc: 0.6245 - kl_divergence: 4.6268
Epoch 114/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9261 - d_real_loss: 0.6238 - d_fake_loss: 0.6001 - d_acc: 0.6259 - kl_divergence: 4.6027
Epoch 115/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9270 - d_real_loss: 0.6231 - d_fake_loss: 0.5990 - d_acc: 0.6255 - kl_divergence: 4.6335
Epoch 116/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9335 - d_real_loss: 0.6245 - d_fake_loss: 0.5996 - d_acc: 0.6215 - kl_divergence: 4.6826
Epoch 117/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9303 - d_real_loss: 0.6219 - d_fake_loss: 0.5974 - d_acc: 0.6263 - kl_divergence: 4.6478
Epoch 118/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9305 - d_real_loss: 0.6214 - d_fake_loss: 0.5978 - d_acc: 0.6297 - kl_divergence: 4.5749
Epoch 119/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9310 - d_real_loss: 0.6218 - d_fake_loss: 0.5968 - d_acc: 0.6281 - kl_divergence: 4.6053
Epoch 120/200
391/391 [==============================] - ETA: 0s - g_loss: 0.9424 - d_real_loss: 0.6202 - d_fake_loss: 0.5954 - d_acc: 0.6303 - kl_divergence: 4.6302
Saving Model Weights At Epoch 120.

391/391 [==============================] - 18s 46ms/step - g_loss: 0.9424 - d_real_loss: 0.6202 - d_fake_loss: 0.5954 - d_acc: 0.6303 - kl_divergence: 4.6302
Epoch 121/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9373 - d_real_loss: 0.6207 - d_fake_loss: 0.5959 - d_acc: 0.6312 - kl_divergence: 4.6552
Epoch 122/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9400 - d_real_loss: 0.6197 - d_fake_loss: 0.5953 - d_acc: 0.6295 - kl_divergence: 4.6315
Epoch 123/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9335 - d_real_loss: 0.6200 - d_fake_loss: 0.5955 - d_acc: 0.6316 - kl_divergence: 4.6620
Epoch 124/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9478 - d_real_loss: 0.6196 - d_fake_loss: 0.5939 - d_acc: 0.6307 - kl_divergence: 4.6277
Epoch 125/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9388 - d_real_loss: 0.6176 - d_fake_loss: 0.5932 - d_acc: 0.6298 - kl_divergence: 4.6060
Epoch 126/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9336 - d_real_loss: 0.6180 - d_fake_loss: 0.5923 - d_acc: 0.6331 - kl_divergence: 4.6418
Epoch 127/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9473 - d_real_loss: 0.6199 - d_fake_loss: 0.5949 - d_acc: 0.6320 - kl_divergence: 4.6512
Epoch 128/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9469 - d_real_loss: 0.6188 - d_fake_loss: 0.5928 - d_acc: 0.6321 - kl_divergence: 4.7002
Epoch 129/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9458 - d_real_loss: 0.6189 - d_fake_loss: 0.5928 - d_acc: 0.6314 - kl_divergence: 4.6477
Epoch 130/200
391/391 [==============================] - ETA: 0s - g_loss: 0.9450 - d_real_loss: 0.6170 - d_fake_loss: 0.5914 - d_acc: 0.6344 - kl_divergence: 4.5971
Saving Model Weights At Epoch 130.

391/391 [==============================] - 19s 49ms/step - g_loss: 0.9450 - d_real_loss: 0.6170 - d_fake_loss: 0.5914 - d_acc: 0.6344 - kl_divergence: 4.5971
Epoch 131/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.9466 - d_real_loss: 0.6181 - d_fake_loss: 0.5920 - d_acc: 0.6300 - kl_divergence: 4.6481
Epoch 132/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9388 - d_real_loss: 0.6166 - d_fake_loss: 0.5909 - d_acc: 0.6336 - kl_divergence: 4.6505
Epoch 133/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9450 - d_real_loss: 0.6169 - d_fake_loss: 0.5912 - d_acc: 0.6333 - kl_divergence: 4.6760
Epoch 134/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9484 - d_real_loss: 0.6179 - d_fake_loss: 0.5912 - d_acc: 0.6337 - kl_divergence: 4.6681
Epoch 135/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9515 - d_real_loss: 0.6170 - d_fake_loss: 0.5910 - d_acc: 0.6324 - kl_divergence: 4.6859
Epoch 136/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9502 - d_real_loss: 0.6179 - d_fake_loss: 0.5904 - d_acc: 0.6320 - kl_divergence: 4.6801
Epoch 137/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9485 - d_real_loss: 0.6172 - d_fake_loss: 0.5912 - d_acc: 0.6325 - kl_divergence: 4.6236
Epoch 138/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9487 - d_real_loss: 0.6153 - d_fake_loss: 0.5885 - d_acc: 0.6356 - kl_divergence: 4.6364
Epoch 139/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9521 - d_real_loss: 0.6164 - d_fake_loss: 0.5901 - d_acc: 0.6350 - kl_divergence: 4.6154
Epoch 140/200
391/391 [==============================] - ETA: 0s - g_loss: 0.9535 - d_real_loss: 0.6134 - d_fake_loss: 0.5871 - d_acc: 0.6396 - kl_divergence: 4.6722
Saving Model Weights At Epoch 140.

391/391 [==============================] - 18s 46ms/step - g_loss: 0.9535 - d_real_loss: 0.6134 - d_fake_loss: 0.5871 - d_acc: 0.6396 - kl_divergence: 4.6721
Epoch 141/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.9597 - d_real_loss: 0.6145 - d_fake_loss: 0.5870 - d_acc: 0.6359 - kl_divergence: 4.6499
Epoch 142/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9562 - d_real_loss: 0.6126 - d_fake_loss: 0.5859 - d_acc: 0.6425 - kl_divergence: 4.6588
Epoch 143/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9599 - d_real_loss: 0.6131 - d_fake_loss: 0.5851 - d_acc: 0.6375 - kl_divergence: 4.6493
Epoch 144/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9606 - d_real_loss: 0.6122 - d_fake_loss: 0.5852 - d_acc: 0.6404 - kl_divergence: 4.6535
Epoch 145/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9616 - d_real_loss: 0.6133 - d_fake_loss: 0.5854 - d_acc: 0.6393 - kl_divergence: 4.6730
Epoch 146/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9671 - d_real_loss: 0.6122 - d_fake_loss: 0.5832 - d_acc: 0.6428 - kl_divergence: 4.6222
Epoch 147/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9600 - d_real_loss: 0.6132 - d_fake_loss: 0.5852 - d_acc: 0.6391 - kl_divergence: 4.7058
Epoch 148/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9649 - d_real_loss: 0.6109 - d_fake_loss: 0.5825 - d_acc: 0.6411 - kl_divergence: 4.6470
Epoch 149/200
391/391 [==============================] - 10s 27ms/step - g_loss: 0.9657 - d_real_loss: 0.6117 - d_fake_loss: 0.5839 - d_acc: 0.6420 - kl_divergence: 4.6309
Epoch 150/200
1/1 [==============================] - 1s 922ms/step g_loss: 0.9654 - d_real_loss: 0.6112 - d_fake_loss: 0.5829 - d_acc: 0.6415 - kl_divergence: 4.61
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 109ms/step
4/4 [==============================] - 0s 36ms/step
Epoch 150: Average (IS): 2.688732624053955 | Std (IS): 0.3576756417751312 | FID Score: 212.18295839665956

Saving Model Weights At Epoch 150.

391/391 [==============================] - 34s 86ms/step - g_loss: 0.9654 - d_real_loss: 0.6110 - d_fake_loss: 0.5830 - d_acc: 0.6418 - kl_divergence: 4.6194
Epoch 151/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.9676 - d_real_loss: 0.6105 - d_fake_loss: 0.5816 - d_acc: 0.6427 - kl_divergence: 4.6437
Epoch 152/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9708 - d_real_loss: 0.6102 - d_fake_loss: 0.5811 - d_acc: 0.6440 - kl_divergence: 4.6120
Epoch 153/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9760 - d_real_loss: 0.6098 - d_fake_loss: 0.5809 - d_acc: 0.6425 - kl_divergence: 4.6887
Epoch 154/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9780 - d_real_loss: 0.6079 - d_fake_loss: 0.5788 - d_acc: 0.6453 - kl_divergence: 4.6745
Epoch 155/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9753 - d_real_loss: 0.6089 - d_fake_loss: 0.5797 - d_acc: 0.6450 - kl_divergence: 4.6486
Epoch 156/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9760 - d_real_loss: 0.6080 - d_fake_loss: 0.5788 - d_acc: 0.6486 - kl_divergence: 4.6517
Epoch 157/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9762 - d_real_loss: 0.6127 - d_fake_loss: 0.5851 - d_acc: 0.6439 - kl_divergence: 4.6654
Epoch 158/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9811 - d_real_loss: 0.6056 - d_fake_loss: 0.5774 - d_acc: 0.6484 - kl_divergence: 4.6560
Epoch 159/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9888 - d_real_loss: 0.6071 - d_fake_loss: 0.5786 - d_acc: 0.6482 - kl_divergence: 4.6546
Epoch 160/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.9804 - d_real_loss: 0.6060 - d_fake_loss: 0.5776 - d_acc: 0.6489 - kl_divergence: 4.6841
Saving Model Weights At Epoch 160.

391/391 [==============================] - 20s 50ms/step - g_loss: 0.9801 - d_real_loss: 0.6062 - d_fake_loss: 0.5774 - d_acc: 0.6489 - kl_divergence: 4.6839
Epoch 161/200
391/391 [==============================] - 11s 27ms/step - g_loss: 0.9781 - d_real_loss: 0.6074 - d_fake_loss: 0.5784 - d_acc: 0.6484 - kl_divergence: 4.6177
Epoch 162/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9832 - d_real_loss: 0.6060 - d_fake_loss: 0.5766 - d_acc: 0.6492 - kl_divergence: 4.6624
Epoch 163/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9809 - d_real_loss: 0.6065 - d_fake_loss: 0.5771 - d_acc: 0.6468 - kl_divergence: 4.7220
Epoch 164/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9818 - d_real_loss: 0.6064 - d_fake_loss: 0.5760 - d_acc: 0.6451 - kl_divergence: 4.6760
Epoch 165/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9821 - d_real_loss: 0.6043 - d_fake_loss: 0.5733 - d_acc: 0.6535 - kl_divergence: 4.6914
Epoch 166/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9876 - d_real_loss: 0.6065 - d_fake_loss: 0.5764 - d_acc: 0.6470 - kl_divergence: 4.7177
Epoch 167/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9955 - d_real_loss: 0.6051 - d_fake_loss: 0.5748 - d_acc: 0.6490 - kl_divergence: 4.6601
Epoch 168/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9897 - d_real_loss: 0.6065 - d_fake_loss: 0.5761 - d_acc: 0.6478 - kl_divergence: 4.6555
Epoch 169/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9929 - d_real_loss: 0.6032 - d_fake_loss: 0.5735 - d_acc: 0.6501 - kl_divergence: 4.6744
Epoch 170/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.9979 - d_real_loss: 0.6036 - d_fake_loss: 0.5720 - d_acc: 0.6515 - kl_divergence: 4.6882
Saving Model Weights At Epoch 170.

391/391 [==============================] - 18s 46ms/step - g_loss: 0.9979 - d_real_loss: 0.6036 - d_fake_loss: 0.5720 - d_acc: 0.6514 - kl_divergence: 4.6881
Epoch 171/200
391/391 [==============================] - 11s 26ms/step - g_loss: 0.9992 - d_real_loss: 0.6022 - d_fake_loss: 0.5703 - d_acc: 0.6532 - kl_divergence: 4.6512
Epoch 172/200
391/391 [==============================] - 10s 26ms/step - g_loss: 0.9957 - d_real_loss: 0.6025 - d_fake_loss: 0.5713 - d_acc: 0.6508 - kl_divergence: 4.6616
Epoch 173/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0026 - d_real_loss: 0.6026 - d_fake_loss: 0.5721 - d_acc: 0.6533 - kl_divergence: 4.7155
Epoch 174/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0037 - d_real_loss: 0.6014 - d_fake_loss: 0.5703 - d_acc: 0.6561 - kl_divergence: 4.6542
Epoch 175/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0032 - d_real_loss: 0.6014 - d_fake_loss: 0.5715 - d_acc: 0.6544 - kl_divergence: 4.6601
Epoch 176/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0080 - d_real_loss: 0.6009 - d_fake_loss: 0.5691 - d_acc: 0.6558 - kl_divergence: 4.6991
Epoch 177/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0034 - d_real_loss: 0.5988 - d_fake_loss: 0.5670 - d_acc: 0.6575 - kl_divergence: 4.6165
Epoch 178/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0098 - d_real_loss: 0.5991 - d_fake_loss: 0.5675 - d_acc: 0.6570 - kl_divergence: 4.6604
Epoch 179/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0140 - d_real_loss: 0.5980 - d_fake_loss: 0.5657 - d_acc: 0.6617 - kl_divergence: 4.6566
Epoch 180/200
389/391 [============================>.] - ETA: 0s - g_loss: 1.0177 - d_real_loss: 0.5982 - d_fake_loss: 0.5657 - d_acc: 0.6606 - kl_divergence: 4.6776
Saving Model Weights At Epoch 180.

391/391 [==============================] - 18s 46ms/step - g_loss: 1.0178 - d_real_loss: 0.5983 - d_fake_loss: 0.5659 - d_acc: 0.6606 - kl_divergence: 4.6775
Epoch 181/200
391/391 [==============================] - 11s 26ms/step - g_loss: 1.0161 - d_real_loss: 0.5980 - d_fake_loss: 0.5662 - d_acc: 0.6564 - kl_divergence: 4.6853
Epoch 182/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0187 - d_real_loss: 0.5968 - d_fake_loss: 0.5642 - d_acc: 0.6591 - kl_divergence: 4.6653
Epoch 183/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0208 - d_real_loss: 0.5980 - d_fake_loss: 0.5659 - d_acc: 0.6602 - kl_divergence: 4.6678
Epoch 184/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0211 - d_real_loss: 0.5965 - d_fake_loss: 0.5647 - d_acc: 0.6622 - kl_divergence: 4.6618
Epoch 185/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0232 - d_real_loss: 0.5952 - d_fake_loss: 0.5622 - d_acc: 0.6624 - kl_divergence: 4.6923
Epoch 186/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0254 - d_real_loss: 0.5950 - d_fake_loss: 0.5609 - d_acc: 0.6637 - kl_divergence: 4.6877
Epoch 187/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0288 - d_real_loss: 0.5959 - d_fake_loss: 0.5626 - d_acc: 0.6621 - kl_divergence: 4.6715
Epoch 188/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0289 - d_real_loss: 0.5957 - d_fake_loss: 0.5624 - d_acc: 0.6646 - kl_divergence: 4.6408
Epoch 189/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0328 - d_real_loss: 0.5944 - d_fake_loss: 0.5599 - d_acc: 0.6633 - kl_divergence: 4.6681
Epoch 190/200
390/391 [============================>.] - ETA: 0s - g_loss: 1.0309 - d_real_loss: 0.5948 - d_fake_loss: 0.5602 - d_acc: 0.6668 - kl_divergence: 4.7212
Saving Model Weights At Epoch 190.

391/391 [==============================] - 18s 46ms/step - g_loss: 1.0305 - d_real_loss: 0.5952 - d_fake_loss: 0.5600 - d_acc: 0.6665 - kl_divergence: 4.7211
Epoch 191/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0338 - d_real_loss: 0.5939 - d_fake_loss: 0.5590 - d_acc: 0.6676 - kl_divergence: 4.6505
Epoch 192/200
391/391 [==============================] - 10s 26ms/step - g_loss: 1.0413 - d_real_loss: 0.5946 - d_fake_loss: 0.5595 - d_acc: 0.6663 - kl_divergence: 4.7195
Epoch 193/200
391/391 [==============================] - 11s 27ms/step - g_loss: 1.0441 - d_real_loss: 0.5924 - d_fake_loss: 0.5559 - d_acc: 0.6696 - kl_divergence: 4.6843
Epoch 194/200
391/391 [==============================] - 11s 27ms/step - g_loss: 1.0472 - d_real_loss: 0.5919 - d_fake_loss: 0.5553 - d_acc: 0.6691 - kl_divergence: 4.6150
Epoch 195/200
391/391 [==============================] - 11s 28ms/step - g_loss: 1.0414 - d_real_loss: 0.5920 - d_fake_loss: 0.5561 - d_acc: 0.6683 - kl_divergence: 4.6998
Epoch 196/200
391/391 [==============================] - 11s 28ms/step - g_loss: 1.0533 - d_real_loss: 0.5911 - d_fake_loss: 0.5547 - d_acc: 0.6686 - kl_divergence: 4.6979
Epoch 197/200
391/391 [==============================] - 11s 27ms/step - g_loss: 1.0507 - d_real_loss: 0.5907 - d_fake_loss: 0.5539 - d_acc: 0.6717 - kl_divergence: 4.6662
Epoch 198/200
391/391 [==============================] - 11s 28ms/step - g_loss: 1.0507 - d_real_loss: 0.5890 - d_fake_loss: 0.5534 - d_acc: 0.6723 - kl_divergence: 4.6884
Epoch 199/200
391/391 [==============================] - 11s 28ms/step - g_loss: 1.0476 - d_real_loss: 0.5908 - d_fake_loss: 0.5544 - d_acc: 0.6710 - kl_divergence: 4.6772
Epoch 200/200
1/1 [==============================] - 1s 927ms/step g_loss: 1.0517 - d_real_loss: 0.5915 - d_fake_loss: 0.5559 - d_acc: 0.6697 - kl_divergence: 4.67
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 30ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 34ms/step
1/1 [==============================] - 0s 40ms/step
4/4 [==============================] - 1s 157ms/step
4/4 [==============================] - 0s 36ms/step
Epoch 200: Average (IS): 2.7134313583374023 | Std (IS): 0.26960572600364685 | FID Score: 209.35054147224187

Saving Model Weights At Epoch 200.

391/391 [==============================] - 36s 93ms/step - g_loss: 1.0517 - d_real_loss: 0.5915 - d_fake_loss: 0.5559 - d_acc: 0.6697 - kl_divergence: 4.6779

DISPLAYING BEST FID AND INCEPTION SCORES FOR cDCGAN

  • Looking at our FID score, we can see that the best FID score for cDCGAN is actually better than the best score for DCGAN, indicating that this model managed to generate better images overall.
  • In addition, the KL Divergence for cDCGAN is lower, indicating that the generated data distribution for cDCGAN is closer to the real data distribution.
In [65]:
monitor = callbacks[0]

# Extract the best KL Divergence
best_kl_div = min(history.history['kl_divergence'])

# Extract the best FID Score
best_fid = min(monitor.cdcgan_fid_scores) if monitor.cdcgan_fid_scores else None

# Extract the best IS Score (average)
best_is_avg = max(is_avg for is_avg, _ in monitor.cdcgan_is_scores) if monitor.cdcgan_is_scores else None

# Create a DataFrame to store these best values
cdcgan_df = pd.DataFrame({
    'Best KL Divergence': [best_kl_div],
    'Best FID': [best_fid],
    'Best IS': [best_is_avg]
})

# Display the DataFrame
cdcgan_df
Out[65]:
Best KL Divergence Best FID Best IS
0 4.565869 209.350541 2.713431

PLOTTING THE MODEL'S PERFORMANCE OVER TIME

  • From the KL Divergence, we see that it starts off very high and experiences a sharp drop afterwards, followed by a steady and gradual decline. We see some fluctuations, but the overall trend is downwards, suggesting that the generator is improving in capturing the target data distribution. The relatively smooth decline as opposed to the abrupt changes in the initial epochs indicates that the model is stabilizing over time.
  • For the discriminator accuracy, we see that theres a significant spike initially, indicating that the discriminator rapidly improved at the beginning. Following the spike, there's a sharp decline in accuracy, which could suggest that the generator started producing more convincing images, making the discriminator's task more challenging. After the initial fluctuations, the accuracy gradually increases, which implies that the discriminator is becoming better at distinguishing between real and fake images as training progresses.
  • For the losses, the generator loss starts very high, indicating that the generated images were easily distinguished from the real ones. The loss quickly decreases and fluctuates but still trends upwards, indicating that the generator is having a harder time fooling the discriminator as it trains. Meanwhile, the discriminator losses decrease sharply and then level out. The leveling out of the discriminator loss for real images is expected, but the flattening of the loss for fake images suggests that the discriminator is not improving significantly in distinguishing fake images over time.
In [60]:
plot_model_performance(history)
No description has been provided for this image

LOADING AND TESTING THE GENERATOR WEIGHTS ON SYNTHETIC IMAGES

In [61]:
# Loading and testing the generator's weights
generator.load_weights('modelweights/cdcgan/epoch_200/generator_weights_epoch_200.h5')
generator.summary()
Model: "cDCGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________
In [62]:
# Generate random latent vectors and class labels
latent_vectors = tf.random.normal(shape=(100, LATENT_DIM))
class_labels = tf.reshape(tf.range(10), shape=(10, 1))
class_labels = tf.tile(class_labels, multiples=(1, 10))
class_labels = tf.reshape(class_labels, shape=(100, 1))

# Generate images using the loaded generator
generated_images = generator([latent_vectors, class_labels], training=False)
generated_images = (generated_images + 1) / 2

# Create a dictionary to map class labels to their corresponding names
label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

# Create a grid of subplots and display generated images with labels
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_images[i], cmap='gray')
    ax.set_title(label_map[class_labels[i].numpy().item()], fontsize=16)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

MODEL 3 : SPECTRAL NORMALIZATION GAN - SNGAN¶

After testing our cDCGAN model, we now experiment with spectral normalization, which is a good way to further help stabilize GAN training. In this case, spectral normalization is applied to the discriminator network, which helps normalize the weights of the network such that it is Lipschitz continuous. This means that for a given function, the magnitude of its gradient should be at most K for every point (Cosgrove, 2018).

The effect of the discriminator being Lipschitz continuous is that it put a constraint on the gradients by enforcing an upper bound. And as a result, reduces the risk of exploding gradients. In addition, spectral normalization controls the variance of the weights in the discriminator preventing vanishing gradients (Lin et al., 2021).

By incorporating spectral normalization, SNGAN overcomes some of the training challenges associated with GANs, such as mode collapse (where the generator produces limited diversity) and vanishing gradients (which hinder convergence). This results in more stable training dynamics and higher-quality generated images.

BUILDING THE SNGAN GENERATOR FUNCTION

In [25]:
def create_generator(latent_dim):
    # foundation for label embedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # linear activation
    label_embedding = Dense(4 * 4, name='Label_Dense')(label_embedding)

    # reshape to additional channel
    label_embedding = Reshape((4, 4, 1), name='Label_Reshape')(label_embedding)

    # foundation for 4x4 image input
    noise_input = Input(shape=(latent_dim,), name='Noise_Input')
    noise_dense = Dense(4 * 4 * 128, name='Noise_Dense')(noise_input)
    noise_dense = ReLU(name='Noise_ReLU')(noise_dense)
    noise_reshape = Reshape((4, 4, 128), name='Noise_Reshape')(noise_dense)

    # concatenate label embedding and image to produce 129-channel output
    concat = keras.layers.Concatenate(name='Concatenate')([noise_reshape, label_embedding])

    # upsample to 8x8
    conv1 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv1')(concat)
    conv1 = ReLU(name='Conv1_ReLU')(conv1)

    # upsample to 16x16
    conv2 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv2')(conv1)
    conv2 = ReLU(name='Conv2_ReLU')(conv2)

    # upsample to 32x32
    conv3 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv3')(conv2)
    conv3 = ReLU(name='Conv3_ReLU')(conv3)

    # output 32x32x3
    output = Conv2D(3, (3, 3), activation='tanh', padding='same', name='Output')(conv3)

    model = Model(inputs=[noise_input, label_input], outputs=output, name='SNGAN_Generator')

    return model
In [26]:
create_generator(latent_dim=128).summary()
Model: "SNGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________

BUILDING THE SNGAN DISCRIMINATOR FUNCTION

In [27]:
def create_discriminator():
    # foundation for label embedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # linear activation
    label_embedding = Dense(32 * 32, name='Label_Dense')(label_embedding)
    
    # reshape to additional channel
    label_embedding = Reshape((32, 32, 1), name='Label_Reshape')(label_embedding)

    # foundation for 32x32 image input
    image_input = Input(shape=(32, 32, 3), name='Image_Input')

    # concatenate label embedding and image to produce 129-channel input
    concat = keras.layers.Concatenate(name='Concatenate')([image_input, label_embedding])

    # downsample to 16x16
    conv1 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv1'))
    conv1 = conv1(concat)
    conv1 = LeakyReLU(alpha=0.2, name='Conv1_Leaky_ReLU')(conv1)
    
    # downsample to 8x8
    conv2 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv2'))
    conv2 = conv2(conv1)
    conv2 = LeakyReLU(alpha=0.2, name='Conv2_Leaky_ReLU')(conv2)
    
    # downsample to 4x4
    conv3 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv3'))
    conv3 = conv3(conv2)
    conv3 = LeakyReLU(alpha=0.2, name='Conv3_Leaky_ReLU')(conv3)

    # flatten feature maps
    flat = Flatten(name='Flatten')(conv3)
    
    output = Dense(units=1, activation='sigmoid', name='Output')(flat)

    model = Model(inputs=[image_input, label_input], outputs=output, name='SNGAN_Discriminator')

    return model
In [28]:
create_discriminator().summary()
Model: "SNGAN_Discriminator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 1024)      11264       ['Label_Embedding[0][0]']        
                                                                                                  
 Image_Input (InputLayer)       [(None, 32, 32, 3)]  0           []                               
                                                                                                  
 Label_Reshape (Reshape)        (None, 32, 32, 1)    0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 32, 32, 4)    0           ['Image_Input[0][0]',            
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 spectral_normalization (Spectr  (None, 16, 16, 128)  4864       ['Concatenate[0][0]']            
 alNormalization)                                                                                 
                                                                                                  
 Conv1_Leaky_ReLU (LeakyReLU)   (None, 16, 16, 128)  0           ['spectral_normalization[0][0]'] 
                                                                                                  
 spectral_normalization_1 (Spec  (None, 8, 8, 128)   147712      ['Conv1_Leaky_ReLU[0][0]']       
 tralNormalization)                                                                               
                                                                                                  
 Conv2_Leaky_ReLU (LeakyReLU)   (None, 8, 8, 128)    0           ['spectral_normalization_1[0][0]'
                                                                 ]                                
                                                                                                  
 spectral_normalization_2 (Spec  (None, 4, 4, 128)   147712      ['Conv2_Leaky_ReLU[0][0]']       
 tralNormalization)                                                                               
                                                                                                  
 Conv3_Leaky_ReLU (LeakyReLU)   (None, 4, 4, 128)    0           ['spectral_normalization_2[0][0]'
                                                                 ]                                
                                                                                                  
 Flatten (Flatten)              (None, 2048)         0           ['Conv3_Leaky_ReLU[0][0]']       
                                                                                                  
 Output (Dense)                 (None, 1)            2049        ['Flatten[0][0]']                
                                                                                                  
==================================================================================================
Total params: 313,701
Trainable params: 313,317
Non-trainable params: 384
__________________________________________________________________________________________________

BUILDING THE TRAINING FUNCTIONS AND CLASSES FOR SNGAN

In [29]:
class SNGAN(Model):
    def __init__(self, generator, discriminator, latent_dim):
        super(SNGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(SNGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_real_loss_metric = keras.metrics.Mean(name='d_real_loss')
        self.d_fake_loss_metric = keras.metrics.Mean(name='d_fake_loss')
        self.d_acc_metric = keras.metrics.BinaryAccuracy(name='d_acc')
        self.kl_metric = keras.metrics.KLDivergence()

    @property
    def metrics(self):
        return [self.g_loss_metric, self.d_real_loss_metric, self.d_fake_loss_metric, self.d_acc_metric]

    def train_step(self, data):
        real_images, class_labels = data
        class_labels = tf.cast(class_labels, 'int32')
        batch_size = tf.shape(real_images)[0]

        # train discriminator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        random_class_labels = tf.random.uniform(shape=(batch_size, 1), minval=0, maxval=10, dtype='int32')

        fake_labels = tf.zeros((batch_size, 1))  # (batch_size, 1)
        real_labels = tf.ones((batch_size, 1))  # (batch_size, 1)

        # freeze generator
        self.discriminator.trainable = True
        self.generator.trainable = False
    
        with tf.GradientTape() as disc_tape:
            disc_tape.watch(self.discriminator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, random_class_labels], training=True)
            real_output = self.discriminator([real_images, class_labels], training=True)
            fake_output = self.discriminator([generated_images, random_class_labels], training=True)
            d_loss_real = self.loss_fn(real_labels, real_output)
            d_loss_fake = self.loss_fn(fake_labels, fake_output)
            d_loss = d_loss_real + d_loss_fake  # log(D(x)) + log(1 - D(G(z))
        
        disc_grads = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(disc_grads, self.discriminator.trainable_variables))

        # train the generator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        random_class_labels = tf.random.uniform(shape=(batch_size, 1), minval=0, maxval=10, dtype='int32')
        misleading_labels = tf.ones((batch_size, 1))

        # freeze discriminator
        self.discriminator.trainable = False
        self.generator.trainable = True

        with tf.GradientTape() as gen_tape:
            gen_tape.watch(self.generator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, random_class_labels], training=True)
            pred_on_fake = self.discriminator([generated_images, random_class_labels], training=True)
            # negative log probability of the discriminator making the correct choice
            g_loss = self.loss_fn(misleading_labels, pred_on_fake)  # maximize log(D(G(z))) = minimize -log(1 - D(G(z)))
        
        gen_grads = gen_tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_optimizer.apply_gradients(zip(gen_grads, self.generator.trainable_variables))

        # update metrics
        self.g_loss_metric.update_state(g_loss)
        self.d_real_loss_metric.update_state(d_loss_real)
        self.d_fake_loss_metric.update_state(d_loss_fake)
        self.d_acc_metric.update_state(real_labels, real_output)
        self.kl_metric.update_state(y_true=real_images, y_pred=generated_images)

        return {
            'g_loss': self.g_loss_metric.result(),
            'd_real_loss': self.d_real_loss_metric.result(),
            'd_fake_loss': self.d_fake_loss_metric.result(),
            'd_acc': self.d_acc_metric.result(),
            'kl_divergence': self.kl_metric.result()
        }
In [30]:
class GANMonitor(Callback):
    def __init__(self, latent_dim, class_labels):
        self.latent_dim = latent_dim
        self.class_labels = class_labels
        self.sngan_fid_scores = []
        self.sngan_is_scores = []

    def on_epoch_end(self, epoch, logs=None):
        # plot 100 generated images and save weights every 10 epochs
        latent_vectors = tf.random.normal(shape=(100, self.latent_dim))
        class_labels = tf.reshape(tf.range(10), shape=(10, 1))
        class_labels = tf.tile(class_labels, multiples=(1, 10))
        class_labels = tf.reshape(class_labels, shape=(100, 1))

        generated_images = self.model.generator([latent_vectors, class_labels], training=False)
        generated_images = (generated_images + 1) / 2

        if not os.path.exists('./modelweights/sngan'):
            os.makedirs('./modelweights/sngan')

        if not os.path.exists('./images/sngan_images'):
            os.makedirs('./images/sngan_images')
            
        if (epoch + 1) % 50 == 0:
            # Calculate FID and IS
            is_avg, is_std = calculate_inception_score(generated_images)
            fid = calculate_fid(generated_images)
            
            # Append metrics to lists
            self.sngan_fid_scores.append(fid)
            self.sngan_is_scores.append((is_avg, is_std))
            
            print(f'Epoch {epoch + 1}: Average (IS): {is_avg} | Std (IS): {is_std} | FID Score: {fid}')

        if (epoch + 1) % 10 == 0:
            if not os.path.exists(f'./modelweights/sngan/epoch_{epoch + 1}'):
                os.makedirs(f'./modelweights/sngan/epoch_{epoch + 1}')
                self.model.generator.save_weights(f'./modelweights/sngan/epoch_{epoch + 1}/generator_weights_epoch_{epoch + 1}.h5')
                self.model.discriminator.save_weights(f'./modelweights/sngan/epoch_{epoch + 1}/discriminator_weights_epoch_{epoch + 1}.h5')
                print(f'\nSaving Model Weights at Epoch {epoch + 1}.\n')

            fig, axes = plt.subplots(10, 10, figsize=(20, 20))
            axes = axes.flatten()

            for i, ax in enumerate(axes):
                ax.imshow(generated_images[i])
                ax.set_title(self.class_labels[class_labels[i].numpy().item()], fontsize=16)
                ax.axis('off')

            plt.tight_layout()
            plt.savefig(f'./images/sngan_images/generated_img_{epoch + 1}.png')
            plt.close()
In [31]:
EPOCHS = 200
LATENT_DIM = 128    
LEARNING_RATE = 2e-4
BETA_1 = 0.5
LABEL_SMOOTHING = 0.1

callbacks = [GANMonitor(LATENT_DIM, class_labels)]

generator = create_generator(LATENT_DIM)
discriminator = create_discriminator()
sngan = SNGAN(generator, discriminator, latent_dim=LATENT_DIM)
sngan.compile(
    g_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    d_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    loss_fn=BinaryCrossentropy(label_smoothing=LABEL_SMOOTHING)
)
In [32]:
history = sngan.fit(dataset, epochs=EPOCHS, callbacks=callbacks, use_multiprocessing=True)
Epoch 1/200
391/391 [==============================] - 34s 36ms/step - g_loss: 1.7644 - d_real_loss: 0.4341 - d_fake_loss: 0.4259 - d_acc: 0.8582 - kl_divergence: 5.3875
Epoch 2/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.6511 - d_real_loss: 0.5495 - d_fake_loss: 0.4368 - d_acc: 0.7396 - kl_divergence: 5.8010
Epoch 3/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.8632 - d_real_loss: 0.4584 - d_fake_loss: 0.3630 - d_acc: 0.8250 - kl_divergence: 5.9702
Epoch 4/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.5597 - d_real_loss: 0.5278 - d_fake_loss: 0.4390 - d_acc: 0.7743 - kl_divergence: 5.4570
Epoch 5/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.2527 - d_real_loss: 0.5611 - d_fake_loss: 0.5217 - d_acc: 0.7383 - kl_divergence: 5.2843
Epoch 6/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.1731 - d_real_loss: 0.5730 - d_fake_loss: 0.5377 - d_acc: 0.7374 - kl_divergence: 5.2278
Epoch 7/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.2158 - d_real_loss: 0.5714 - d_fake_loss: 0.5381 - d_acc: 0.7368 - kl_divergence: 5.1759
Epoch 8/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.2482 - d_real_loss: 0.5483 - d_fake_loss: 0.5173 - d_acc: 0.7593 - kl_divergence: 5.1311
Epoch 9/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.3191 - d_real_loss: 0.5410 - d_fake_loss: 0.5131 - d_acc: 0.7736 - kl_divergence: 5.0918
Epoch 10/200
391/391 [==============================] - 22s 57ms/step - g_loss: 1.3315 - d_real_loss: 0.5452 - d_fake_loss: 0.5105 - d_acc: 0.7647 - kl_divergence: 5.0603
Epoch 11/200
391/391 [==============================] - 14s 34ms/step - g_loss: 1.2482 - d_real_loss: 0.5645 - d_fake_loss: 0.5424 - d_acc: 0.7444 - kl_divergence: 5.0457
Epoch 12/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.2544 - d_real_loss: 0.5611 - d_fake_loss: 0.5391 - d_acc: 0.7469 - kl_divergence: 5.0323
Epoch 13/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.1713 - d_real_loss: 0.5562 - d_fake_loss: 0.5385 - d_acc: 0.7591 - kl_divergence: 5.0213
Epoch 14/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.2054 - d_real_loss: 0.5778 - d_fake_loss: 0.5591 - d_acc: 0.7283 - kl_divergence: 5.0058
Epoch 15/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.1427 - d_real_loss: 0.5713 - d_fake_loss: 0.5572 - d_acc: 0.7412 - kl_divergence: 4.9918
Epoch 16/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.1262 - d_real_loss: 0.5689 - d_fake_loss: 0.5519 - d_acc: 0.7407 - kl_divergence: 4.9830
Epoch 17/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.1088 - d_real_loss: 0.5663 - d_fake_loss: 0.5526 - d_acc: 0.7462 - kl_divergence: 4.9738
Epoch 18/200
391/391 [==============================] - 14s 35ms/step - g_loss: 1.0829 - d_real_loss: 0.5745 - d_fake_loss: 0.5590 - d_acc: 0.7351 - kl_divergence: 4.9656
Epoch 19/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.0755 - d_real_loss: 0.5803 - d_fake_loss: 0.5679 - d_acc: 0.7272 - kl_divergence: 4.9593
Epoch 20/200
390/391 [============================>.] - ETA: 0s - g_loss: 1.0693 - d_real_loss: 0.5791 - d_fake_loss: 0.5671 - d_acc: 0.7271 - kl_divergence: 4.9549
Saving Model Weights at Epoch 20.

391/391 [==============================] - 22s 57ms/step - g_loss: 1.0689 - d_real_loss: 0.5792 - d_fake_loss: 0.5671 - d_acc: 0.7269 - kl_divergence: 4.9549
Epoch 21/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.0800 - d_real_loss: 0.5946 - d_fake_loss: 0.5794 - d_acc: 0.7048 - kl_divergence: 4.9525
Epoch 22/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.0207 - d_real_loss: 0.5988 - d_fake_loss: 0.5877 - d_acc: 0.6989 - kl_divergence: 4.9499
Epoch 23/200
391/391 [==============================] - 13s 34ms/step - g_loss: 1.0064 - d_real_loss: 0.6068 - d_fake_loss: 0.5931 - d_acc: 0.6909 - kl_divergence: 4.9475
Epoch 24/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9772 - d_real_loss: 0.6115 - d_fake_loss: 0.6001 - d_acc: 0.6814 - kl_divergence: 4.9455
Epoch 25/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9694 - d_real_loss: 0.6232 - d_fake_loss: 0.6110 - d_acc: 0.6656 - kl_divergence: 4.9434
Epoch 26/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.9522 - d_real_loss: 0.6270 - d_fake_loss: 0.6158 - d_acc: 0.6611 - kl_divergence: 4.9407
Epoch 27/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.9320 - d_real_loss: 0.6294 - d_fake_loss: 0.6189 - d_acc: 0.6558 - kl_divergence: 4.9381
Epoch 28/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.9313 - d_real_loss: 0.6358 - d_fake_loss: 0.6225 - d_acc: 0.6503 - kl_divergence: 4.9359
Epoch 29/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9329 - d_real_loss: 0.6370 - d_fake_loss: 0.6242 - d_acc: 0.6447 - kl_divergence: 4.9329
Epoch 30/200
389/391 [============================>.] - ETA: 0s - g_loss: 0.9291 - d_real_loss: 0.6397 - d_fake_loss: 0.6273 - d_acc: 0.6384 - kl_divergence: 4.9308
Saving Model Weights at Epoch 30.

391/391 [==============================] - 22s 56ms/step - g_loss: 0.9293 - d_real_loss: 0.6399 - d_fake_loss: 0.6273 - d_acc: 0.6381 - kl_divergence: 4.9308
Epoch 31/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.9094 - d_real_loss: 0.6361 - d_fake_loss: 0.6243 - d_acc: 0.6479 - kl_divergence: 4.9287
Epoch 32/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9109 - d_real_loss: 0.6371 - d_fake_loss: 0.6266 - d_acc: 0.6438 - kl_divergence: 4.9271
Epoch 33/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9199 - d_real_loss: 0.6378 - d_fake_loss: 0.6262 - d_acc: 0.6415 - kl_divergence: 4.9257
Epoch 34/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.9159 - d_real_loss: 0.6396 - d_fake_loss: 0.6275 - d_acc: 0.6384 - kl_divergence: 4.9249
Epoch 35/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8923 - d_real_loss: 0.6378 - d_fake_loss: 0.6284 - d_acc: 0.6425 - kl_divergence: 4.9243
Epoch 36/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.9051 - d_real_loss: 0.6424 - d_fake_loss: 0.6324 - d_acc: 0.6385 - kl_divergence: 4.9235
Epoch 37/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8836 - d_real_loss: 0.6417 - d_fake_loss: 0.6319 - d_acc: 0.6352 - kl_divergence: 4.9224
Epoch 38/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8895 - d_real_loss: 0.6478 - d_fake_loss: 0.6388 - d_acc: 0.6291 - kl_divergence: 4.9207
Epoch 39/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8620 - d_real_loss: 0.6478 - d_fake_loss: 0.6384 - d_acc: 0.6286 - kl_divergence: 4.9202
Epoch 40/200
391/391 [==============================] - ETA: 0s - g_loss: 0.8643 - d_real_loss: 0.6517 - d_fake_loss: 0.6429 - d_acc: 0.6229 - kl_divergence: 4.9194
Saving Model Weights at Epoch 40.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.8643 - d_real_loss: 0.6517 - d_fake_loss: 0.6429 - d_acc: 0.6229 - kl_divergence: 4.9194
Epoch 41/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.8521 - d_real_loss: 0.6511 - d_fake_loss: 0.6430 - d_acc: 0.6240 - kl_divergence: 4.9187
Epoch 42/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8555 - d_real_loss: 0.6555 - d_fake_loss: 0.6454 - d_acc: 0.6131 - kl_divergence: 4.9173
Epoch 43/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8408 - d_real_loss: 0.6526 - d_fake_loss: 0.6452 - d_acc: 0.6220 - kl_divergence: 4.9166
Epoch 44/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8469 - d_real_loss: 0.6544 - d_fake_loss: 0.6454 - d_acc: 0.6172 - kl_divergence: 4.9162
Epoch 45/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8463 - d_real_loss: 0.6553 - d_fake_loss: 0.6471 - d_acc: 0.6139 - kl_divergence: 4.9155
Epoch 46/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8315 - d_real_loss: 0.6536 - d_fake_loss: 0.6470 - d_acc: 0.6171 - kl_divergence: 4.9149
Epoch 47/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8335 - d_real_loss: 0.6567 - d_fake_loss: 0.6492 - d_acc: 0.6106 - kl_divergence: 4.9147
Epoch 48/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8264 - d_real_loss: 0.6575 - d_fake_loss: 0.6507 - d_acc: 0.6151 - kl_divergence: 4.9143
Epoch 49/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8244 - d_real_loss: 0.6582 - d_fake_loss: 0.6515 - d_acc: 0.6089 - kl_divergence: 4.9136
Epoch 50/200
1/1 [==============================] - 1s 1s/steps - g_loss: 0.8191 - d_real_loss: 0.6570 - d_fake_loss: 0.6516 - d_acc: 0.6149 - kl_divergence: 4.91
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 39ms/step
1/1 [==============================] - 0s 48ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 40ms/step
4/4 [==============================] - 2s 61ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 50: Average (IS): 2.4155774116516113 | Std (IS): 0.3610619306564331 | FID Score: 241.98800684407553

Saving Model Weights at Epoch 50.

391/391 [==============================] - 40s 103ms/step - g_loss: 0.8192 - d_real_loss: 0.6569 - d_fake_loss: 0.6516 - d_acc: 0.6148 - kl_divergence: 4.9132
Epoch 51/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8215 - d_real_loss: 0.6583 - d_fake_loss: 0.6522 - d_acc: 0.6109 - kl_divergence: 4.9130
Epoch 52/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8194 - d_real_loss: 0.6597 - d_fake_loss: 0.6539 - d_acc: 0.6091 - kl_divergence: 4.9130
Epoch 53/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8201 - d_real_loss: 0.6592 - d_fake_loss: 0.6539 - d_acc: 0.6101 - kl_divergence: 4.9126
Epoch 54/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8178 - d_real_loss: 0.6585 - d_fake_loss: 0.6536 - d_acc: 0.6106 - kl_divergence: 4.9119
Epoch 55/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8279 - d_real_loss: 0.6608 - d_fake_loss: 0.6536 - d_acc: 0.6055 - kl_divergence: 4.9115
Epoch 56/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8293 - d_real_loss: 0.6612 - d_fake_loss: 0.6524 - d_acc: 0.6023 - kl_divergence: 4.9108
Epoch 57/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8186 - d_real_loss: 0.6580 - d_fake_loss: 0.6533 - d_acc: 0.6123 - kl_divergence: 4.9102
Epoch 58/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8214 - d_real_loss: 0.6596 - d_fake_loss: 0.6535 - d_acc: 0.6058 - kl_divergence: 4.9100
Epoch 59/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8252 - d_real_loss: 0.6606 - d_fake_loss: 0.6534 - d_acc: 0.6064 - kl_divergence: 4.9094
Epoch 60/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.8119 - d_real_loss: 0.6578 - d_fake_loss: 0.6536 - d_acc: 0.6099 - kl_divergence: 4.9089
Saving Model Weights at Epoch 60.

391/391 [==============================] - 22s 57ms/step - g_loss: 0.8122 - d_real_loss: 0.6576 - d_fake_loss: 0.6536 - d_acc: 0.6101 - kl_divergence: 4.9089
Epoch 61/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.8108 - d_real_loss: 0.6591 - d_fake_loss: 0.6539 - d_acc: 0.6094 - kl_divergence: 4.9086
Epoch 62/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8078 - d_real_loss: 0.6592 - d_fake_loss: 0.6548 - d_acc: 0.6080 - kl_divergence: 4.9084
Epoch 63/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8080 - d_real_loss: 0.6605 - d_fake_loss: 0.6554 - d_acc: 0.6078 - kl_divergence: 4.9081
Epoch 64/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8056 - d_real_loss: 0.6604 - d_fake_loss: 0.6562 - d_acc: 0.6066 - kl_divergence: 4.9077
Epoch 65/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8079 - d_real_loss: 0.6608 - d_fake_loss: 0.6564 - d_acc: 0.6053 - kl_divergence: 4.9080
Epoch 66/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8100 - d_real_loss: 0.6610 - d_fake_loss: 0.6563 - d_acc: 0.6056 - kl_divergence: 4.9079
Epoch 67/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8071 - d_real_loss: 0.6591 - d_fake_loss: 0.6548 - d_acc: 0.6065 - kl_divergence: 4.9078
Epoch 68/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8110 - d_real_loss: 0.6604 - d_fake_loss: 0.6550 - d_acc: 0.6048 - kl_divergence: 4.9075
Epoch 69/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8058 - d_real_loss: 0.6600 - d_fake_loss: 0.6551 - d_acc: 0.6057 - kl_divergence: 4.9074
Epoch 70/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.8060 - d_real_loss: 0.6611 - d_fake_loss: 0.6557 - d_acc: 0.6058 - kl_divergence: 4.9072
Saving Model Weights at Epoch 70.

391/391 [==============================] - 21s 55ms/step - g_loss: 0.8060 - d_real_loss: 0.6610 - d_fake_loss: 0.6558 - d_acc: 0.6060 - kl_divergence: 4.9072
Epoch 71/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8080 - d_real_loss: 0.6625 - d_fake_loss: 0.6569 - d_acc: 0.6026 - kl_divergence: 4.9072
Epoch 72/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8030 - d_real_loss: 0.6619 - d_fake_loss: 0.6559 - d_acc: 0.6064 - kl_divergence: 4.9069
Epoch 73/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8032 - d_real_loss: 0.6605 - d_fake_loss: 0.6564 - d_acc: 0.6084 - kl_divergence: 4.9065
Epoch 74/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8060 - d_real_loss: 0.6618 - d_fake_loss: 0.6566 - d_acc: 0.6015 - kl_divergence: 4.9066
Epoch 75/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8015 - d_real_loss: 0.6613 - d_fake_loss: 0.6568 - d_acc: 0.6011 - kl_divergence: 4.9066
Epoch 76/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8038 - d_real_loss: 0.6622 - d_fake_loss: 0.6563 - d_acc: 0.6052 - kl_divergence: 4.9068
Epoch 77/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8001 - d_real_loss: 0.6617 - d_fake_loss: 0.6565 - d_acc: 0.6054 - kl_divergence: 4.9070
Epoch 78/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8069 - d_real_loss: 0.6620 - d_fake_loss: 0.6567 - d_acc: 0.6028 - kl_divergence: 4.9073
Epoch 79/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8063 - d_real_loss: 0.6615 - d_fake_loss: 0.6559 - d_acc: 0.6053 - kl_divergence: 4.9074
Epoch 80/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7989 - d_real_loss: 0.6609 - d_fake_loss: 0.6564 - d_acc: 0.6051 - kl_divergence: 4.9075
Saving Model Weights at Epoch 80.

391/391 [==============================] - 22s 57ms/step - g_loss: 0.7990 - d_real_loss: 0.6609 - d_fake_loss: 0.6563 - d_acc: 0.6051 - kl_divergence: 4.9075
Epoch 81/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.8051 - d_real_loss: 0.6620 - d_fake_loss: 0.6561 - d_acc: 0.6072 - kl_divergence: 4.9075
Epoch 82/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.8027 - d_real_loss: 0.6610 - d_fake_loss: 0.6563 - d_acc: 0.6039 - kl_divergence: 4.9076
Epoch 83/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8028 - d_real_loss: 0.6602 - d_fake_loss: 0.6556 - d_acc: 0.6074 - kl_divergence: 4.9077
Epoch 84/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8018 - d_real_loss: 0.6613 - d_fake_loss: 0.6568 - d_acc: 0.6037 - kl_divergence: 4.9074
Epoch 85/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8012 - d_real_loss: 0.6626 - d_fake_loss: 0.6575 - d_acc: 0.5996 - kl_divergence: 4.9073
Epoch 86/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7983 - d_real_loss: 0.6620 - d_fake_loss: 0.6577 - d_acc: 0.6021 - kl_divergence: 4.9073
Epoch 87/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7941 - d_real_loss: 0.6616 - d_fake_loss: 0.6574 - d_acc: 0.6031 - kl_divergence: 4.9073
Epoch 88/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7982 - d_real_loss: 0.6636 - d_fake_loss: 0.6590 - d_acc: 0.5978 - kl_divergence: 4.9073
Epoch 89/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7970 - d_real_loss: 0.6622 - d_fake_loss: 0.6579 - d_acc: 0.6035 - kl_divergence: 4.9072
Epoch 90/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7978 - d_real_loss: 0.6632 - d_fake_loss: 0.6587 - d_acc: 0.5969 - kl_divergence: 4.9070
Saving Model Weights at Epoch 90.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7978 - d_real_loss: 0.6632 - d_fake_loss: 0.6587 - d_acc: 0.5969 - kl_divergence: 4.9070
Epoch 91/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7974 - d_real_loss: 0.6641 - d_fake_loss: 0.6595 - d_acc: 0.5931 - kl_divergence: 4.9069
Epoch 92/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7938 - d_real_loss: 0.6633 - d_fake_loss: 0.6591 - d_acc: 0.5971 - kl_divergence: 4.9068
Epoch 93/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7956 - d_real_loss: 0.6647 - d_fake_loss: 0.6598 - d_acc: 0.5944 - kl_divergence: 4.9068
Epoch 94/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7944 - d_real_loss: 0.6642 - d_fake_loss: 0.6598 - d_acc: 0.5981 - kl_divergence: 4.9066
Epoch 95/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7902 - d_real_loss: 0.6649 - d_fake_loss: 0.6604 - d_acc: 0.5978 - kl_divergence: 4.9067
Epoch 96/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7915 - d_real_loss: 0.6655 - d_fake_loss: 0.6611 - d_acc: 0.5929 - kl_divergence: 4.9066
Epoch 97/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7860 - d_real_loss: 0.6652 - d_fake_loss: 0.6618 - d_acc: 0.5926 - kl_divergence: 4.9067
Epoch 98/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7871 - d_real_loss: 0.6650 - d_fake_loss: 0.6607 - d_acc: 0.5989 - kl_divergence: 4.9067
Epoch 99/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7887 - d_real_loss: 0.6661 - d_fake_loss: 0.6626 - d_acc: 0.5933 - kl_divergence: 4.9067
Epoch 100/200
1/1 [==============================] - 1s 925ms/step g_loss: 0.7911 - d_real_loss: 0.6683 - d_fake_loss: 0.6639 - d_acc: 0.5890 - kl_divergence: 4.90
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 32ms/step
4/4 [==============================] - 1s 93ms/step
4/4 [==============================] - 0s 32ms/step
Epoch 100: Average (IS): 2.578585147857666 | Std (IS): 0.3237943947315216 | FID Score: 225.4718659167719

Saving Model Weights at Epoch 100.

391/391 [==============================] - 39s 99ms/step - g_loss: 0.7912 - d_real_loss: 0.6681 - d_fake_loss: 0.6640 - d_acc: 0.5892 - kl_divergence: 4.9067
Epoch 101/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.7816 - d_real_loss: 0.6654 - d_fake_loss: 0.6616 - d_acc: 0.5957 - kl_divergence: 4.9066
Epoch 102/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7850 - d_real_loss: 0.6673 - d_fake_loss: 0.6636 - d_acc: 0.5916 - kl_divergence: 4.9064
Epoch 103/200
391/391 [==============================] - 14s 36ms/step - g_loss: 0.7906 - d_real_loss: 0.6683 - d_fake_loss: 0.6626 - d_acc: 0.5892 - kl_divergence: 4.9064
Epoch 104/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7828 - d_real_loss: 0.6677 - d_fake_loss: 0.6642 - d_acc: 0.5911 - kl_divergence: 4.9062
Epoch 105/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7885 - d_real_loss: 0.6688 - d_fake_loss: 0.6652 - d_acc: 0.5871 - kl_divergence: 4.9062
Epoch 106/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7869 - d_real_loss: 0.6681 - d_fake_loss: 0.6639 - d_acc: 0.5896 - kl_divergence: 4.9062
Epoch 107/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7819 - d_real_loss: 0.6677 - d_fake_loss: 0.6641 - d_acc: 0.5926 - kl_divergence: 4.9062
Epoch 108/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7831 - d_real_loss: 0.6686 - d_fake_loss: 0.6646 - d_acc: 0.5887 - kl_divergence: 4.9062
Epoch 109/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7874 - d_real_loss: 0.6697 - d_fake_loss: 0.6656 - d_acc: 0.5866 - kl_divergence: 4.9061
Epoch 110/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7768 - d_real_loss: 0.6677 - d_fake_loss: 0.6651 - d_acc: 0.5926 - kl_divergence: 4.9059
Saving Model Weights at Epoch 110.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7768 - d_real_loss: 0.6677 - d_fake_loss: 0.6651 - d_acc: 0.5926 - kl_divergence: 4.9059
Epoch 111/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7822 - d_real_loss: 0.6684 - d_fake_loss: 0.6651 - d_acc: 0.5870 - kl_divergence: 4.9057
Epoch 112/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7807 - d_real_loss: 0.6696 - d_fake_loss: 0.6658 - d_acc: 0.5857 - kl_divergence: 4.9057
Epoch 113/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7775 - d_real_loss: 0.6675 - d_fake_loss: 0.6644 - d_acc: 0.5910 - kl_divergence: 4.9055
Epoch 114/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7769 - d_real_loss: 0.6685 - d_fake_loss: 0.6653 - d_acc: 0.5905 - kl_divergence: 4.9053
Epoch 115/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7789 - d_real_loss: 0.6702 - d_fake_loss: 0.6670 - d_acc: 0.5863 - kl_divergence: 4.9051
Epoch 116/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7779 - d_real_loss: 0.6703 - d_fake_loss: 0.6663 - d_acc: 0.5839 - kl_divergence: 4.9049
Epoch 117/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7758 - d_real_loss: 0.6702 - d_fake_loss: 0.6677 - d_acc: 0.5863 - kl_divergence: 4.9047
Epoch 118/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7732 - d_real_loss: 0.6710 - d_fake_loss: 0.6677 - d_acc: 0.5838 - kl_divergence: 4.9046
Epoch 119/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7791 - d_real_loss: 0.6707 - d_fake_loss: 0.6673 - d_acc: 0.5832 - kl_divergence: 4.9044
Epoch 120/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7698 - d_real_loss: 0.6710 - d_fake_loss: 0.6679 - d_acc: 0.5863 - kl_divergence: 4.9043
Saving Model Weights at Epoch 120.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7698 - d_real_loss: 0.6710 - d_fake_loss: 0.6678 - d_acc: 0.5865 - kl_divergence: 4.9043
Epoch 121/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.7741 - d_real_loss: 0.6721 - d_fake_loss: 0.6694 - d_acc: 0.5814 - kl_divergence: 4.9041
Epoch 122/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7725 - d_real_loss: 0.6711 - d_fake_loss: 0.6682 - d_acc: 0.5814 - kl_divergence: 4.9041
Epoch 123/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7742 - d_real_loss: 0.6720 - d_fake_loss: 0.6687 - d_acc: 0.5832 - kl_divergence: 4.9038
Epoch 124/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7721 - d_real_loss: 0.6722 - d_fake_loss: 0.6689 - d_acc: 0.5768 - kl_divergence: 4.9038
Epoch 125/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7701 - d_real_loss: 0.6724 - d_fake_loss: 0.6698 - d_acc: 0.5805 - kl_divergence: 4.9037
Epoch 126/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7699 - d_real_loss: 0.6721 - d_fake_loss: 0.6691 - d_acc: 0.5858 - kl_divergence: 4.9036
Epoch 127/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7705 - d_real_loss: 0.6719 - d_fake_loss: 0.6695 - d_acc: 0.5789 - kl_divergence: 4.9033
Epoch 128/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7675 - d_real_loss: 0.6719 - d_fake_loss: 0.6691 - d_acc: 0.5808 - kl_divergence: 4.9033
Epoch 129/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.7659 - d_real_loss: 0.6719 - d_fake_loss: 0.6693 - d_acc: 0.5799 - kl_divergence: 4.9033
Epoch 130/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7674 - d_real_loss: 0.6723 - d_fake_loss: 0.6701 - d_acc: 0.5823 - kl_divergence: 4.9032
Saving Model Weights at Epoch 130.

391/391 [==============================] - 23s 58ms/step - g_loss: 0.7674 - d_real_loss: 0.6723 - d_fake_loss: 0.6701 - d_acc: 0.5823 - kl_divergence: 4.9032
Epoch 131/200
391/391 [==============================] - 14s 34ms/step - g_loss: 0.7723 - d_real_loss: 0.6736 - d_fake_loss: 0.6709 - d_acc: 0.5759 - kl_divergence: 4.9029
Epoch 132/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7666 - d_real_loss: 0.6727 - d_fake_loss: 0.6705 - d_acc: 0.5761 - kl_divergence: 4.9028
Epoch 133/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7633 - d_real_loss: 0.6729 - d_fake_loss: 0.6705 - d_acc: 0.5796 - kl_divergence: 4.9027
Epoch 134/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7672 - d_real_loss: 0.6732 - d_fake_loss: 0.6708 - d_acc: 0.5788 - kl_divergence: 4.9027
Epoch 135/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7666 - d_real_loss: 0.6741 - d_fake_loss: 0.6715 - d_acc: 0.5784 - kl_divergence: 4.9025
Epoch 136/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7626 - d_real_loss: 0.6735 - d_fake_loss: 0.6711 - d_acc: 0.5755 - kl_divergence: 4.9025
Epoch 137/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7685 - d_real_loss: 0.6735 - d_fake_loss: 0.6716 - d_acc: 0.5766 - kl_divergence: 4.9024
Epoch 138/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7664 - d_real_loss: 0.6733 - d_fake_loss: 0.6707 - d_acc: 0.5800 - kl_divergence: 4.9023
Epoch 139/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7651 - d_real_loss: 0.6742 - d_fake_loss: 0.6714 - d_acc: 0.5770 - kl_divergence: 4.9022
Epoch 140/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7647 - d_real_loss: 0.6740 - d_fake_loss: 0.6723 - d_acc: 0.5775 - kl_divergence: 4.9020
Saving Model Weights at Epoch 140.

391/391 [==============================] - 21s 55ms/step - g_loss: 0.7647 - d_real_loss: 0.6740 - d_fake_loss: 0.6723 - d_acc: 0.5775 - kl_divergence: 4.9020
Epoch 141/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7618 - d_real_loss: 0.6738 - d_fake_loss: 0.6717 - d_acc: 0.5751 - kl_divergence: 4.9019
Epoch 142/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7650 - d_real_loss: 0.6747 - d_fake_loss: 0.6724 - d_acc: 0.5718 - kl_divergence: 4.9017
Epoch 143/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7636 - d_real_loss: 0.6749 - d_fake_loss: 0.6723 - d_acc: 0.5737 - kl_divergence: 4.9017
Epoch 144/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7613 - d_real_loss: 0.6744 - d_fake_loss: 0.6722 - d_acc: 0.5746 - kl_divergence: 4.9017
Epoch 145/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7611 - d_real_loss: 0.6748 - d_fake_loss: 0.6726 - d_acc: 0.5754 - kl_divergence: 4.9017
Epoch 146/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7637 - d_real_loss: 0.6747 - d_fake_loss: 0.6723 - d_acc: 0.5769 - kl_divergence: 4.9017
Epoch 147/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7610 - d_real_loss: 0.6747 - d_fake_loss: 0.6721 - d_acc: 0.5749 - kl_divergence: 4.9016
Epoch 148/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7603 - d_real_loss: 0.6746 - d_fake_loss: 0.6727 - d_acc: 0.5745 - kl_divergence: 4.9015
Epoch 149/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7618 - d_real_loss: 0.6749 - d_fake_loss: 0.6723 - d_acc: 0.5739 - kl_divergence: 4.9014
Epoch 150/200
1/1 [==============================] - 1s 950ms/step g_loss: 0.7634 - d_real_loss: 0.6749 - d_fake_loss: 0.6727 - d_acc: 0.5745 - kl_divergence: 4.90
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
4/4 [==============================] - 1s 113ms/step
4/4 [==============================] - 0s 34ms/step
Epoch 150: Average (IS): 2.910616636276245 | Std (IS): 0.48887062072753906 | FID Score: 231.23448832572456

Saving Model Weights at Epoch 150.

391/391 [==============================] - 38s 96ms/step - g_loss: 0.7634 - d_real_loss: 0.6749 - d_fake_loss: 0.6727 - d_acc: 0.5747 - kl_divergence: 4.9013
Epoch 151/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7621 - d_real_loss: 0.6745 - d_fake_loss: 0.6720 - d_acc: 0.5757 - kl_divergence: 4.9013
Epoch 152/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7645 - d_real_loss: 0.6747 - d_fake_loss: 0.6724 - d_acc: 0.5757 - kl_divergence: 4.9012
Epoch 153/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7600 - d_real_loss: 0.6739 - d_fake_loss: 0.6716 - d_acc: 0.5765 - kl_divergence: 4.9011
Epoch 154/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7614 - d_real_loss: 0.6744 - d_fake_loss: 0.6728 - d_acc: 0.5754 - kl_divergence: 4.9009
Epoch 155/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7621 - d_real_loss: 0.6744 - d_fake_loss: 0.6718 - d_acc: 0.5760 - kl_divergence: 4.9007
Epoch 156/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7608 - d_real_loss: 0.6741 - d_fake_loss: 0.6720 - d_acc: 0.5771 - kl_divergence: 4.9006
Epoch 157/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7612 - d_real_loss: 0.6747 - d_fake_loss: 0.6725 - d_acc: 0.5739 - kl_divergence: 4.9005
Epoch 158/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7612 - d_real_loss: 0.6741 - d_fake_loss: 0.6722 - d_acc: 0.5758 - kl_divergence: 4.9004
Epoch 159/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7609 - d_real_loss: 0.6745 - d_fake_loss: 0.6727 - d_acc: 0.5777 - kl_divergence: 4.9003
Epoch 160/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7643 - d_real_loss: 0.6751 - d_fake_loss: 0.6719 - d_acc: 0.5742 - kl_divergence: 4.9002
Saving Model Weights at Epoch 160.

391/391 [==============================] - 23s 59ms/step - g_loss: 0.7646 - d_real_loss: 0.6749 - d_fake_loss: 0.6722 - d_acc: 0.5746 - kl_divergence: 4.9002
Epoch 161/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7621 - d_real_loss: 0.6751 - d_fake_loss: 0.6726 - d_acc: 0.5723 - kl_divergence: 4.9001
Epoch 162/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7589 - d_real_loss: 0.6735 - d_fake_loss: 0.6715 - d_acc: 0.5773 - kl_divergence: 4.8998
Epoch 163/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7606 - d_real_loss: 0.6737 - d_fake_loss: 0.6715 - d_acc: 0.5764 - kl_divergence: 4.8997
Epoch 164/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7588 - d_real_loss: 0.6737 - d_fake_loss: 0.6716 - d_acc: 0.5790 - kl_divergence: 4.8995
Epoch 165/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7641 - d_real_loss: 0.6750 - d_fake_loss: 0.6730 - d_acc: 0.5749 - kl_divergence: 4.8993
Epoch 166/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7605 - d_real_loss: 0.6738 - d_fake_loss: 0.6715 - d_acc: 0.5766 - kl_divergence: 4.8994
Epoch 167/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7610 - d_real_loss: 0.6743 - d_fake_loss: 0.6723 - d_acc: 0.5797 - kl_divergence: 4.8992
Epoch 168/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7611 - d_real_loss: 0.6742 - d_fake_loss: 0.6721 - d_acc: 0.5760 - kl_divergence: 4.8992
Epoch 169/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7611 - d_real_loss: 0.6745 - d_fake_loss: 0.6718 - d_acc: 0.5746 - kl_divergence: 4.8991
Epoch 170/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7602 - d_real_loss: 0.6742 - d_fake_loss: 0.6729 - d_acc: 0.5769 - kl_divergence: 4.8989
Saving Model Weights at Epoch 170.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7602 - d_real_loss: 0.6742 - d_fake_loss: 0.6729 - d_acc: 0.5769 - kl_divergence: 4.8989
Epoch 171/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7619 - d_real_loss: 0.6753 - d_fake_loss: 0.6726 - d_acc: 0.5733 - kl_divergence: 4.8987
Epoch 172/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7586 - d_real_loss: 0.6744 - d_fake_loss: 0.6722 - d_acc: 0.5757 - kl_divergence: 4.8986
Epoch 173/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7585 - d_real_loss: 0.6747 - d_fake_loss: 0.6726 - d_acc: 0.5739 - kl_divergence: 4.8985
Epoch 174/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7591 - d_real_loss: 0.6751 - d_fake_loss: 0.6729 - d_acc: 0.5730 - kl_divergence: 4.8984
Epoch 175/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7587 - d_real_loss: 0.6750 - d_fake_loss: 0.6729 - d_acc: 0.5737 - kl_divergence: 4.8983
Epoch 176/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7580 - d_real_loss: 0.6748 - d_fake_loss: 0.6728 - d_acc: 0.5759 - kl_divergence: 4.8983
Epoch 177/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7568 - d_real_loss: 0.6754 - d_fake_loss: 0.6737 - d_acc: 0.5749 - kl_divergence: 4.8981
Epoch 178/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7588 - d_real_loss: 0.6754 - d_fake_loss: 0.6734 - d_acc: 0.5740 - kl_divergence: 4.8980
Epoch 179/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7592 - d_real_loss: 0.6751 - d_fake_loss: 0.6729 - d_acc: 0.5781 - kl_divergence: 4.8978
Epoch 180/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7584 - d_real_loss: 0.6747 - d_fake_loss: 0.6736 - d_acc: 0.5788 - kl_divergence: 4.8977
Saving Model Weights at Epoch 180.

391/391 [==============================] - 21s 55ms/step - g_loss: 0.7582 - d_real_loss: 0.6750 - d_fake_loss: 0.6732 - d_acc: 0.5783 - kl_divergence: 4.8977
Epoch 181/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7585 - d_real_loss: 0.6749 - d_fake_loss: 0.6730 - d_acc: 0.5766 - kl_divergence: 4.8975
Epoch 182/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7565 - d_real_loss: 0.6751 - d_fake_loss: 0.6731 - d_acc: 0.5762 - kl_divergence: 4.8973
Epoch 183/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7560 - d_real_loss: 0.6746 - d_fake_loss: 0.6731 - d_acc: 0.5740 - kl_divergence: 4.8972
Epoch 184/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7583 - d_real_loss: 0.6760 - d_fake_loss: 0.6737 - d_acc: 0.5728 - kl_divergence: 4.8971
Epoch 185/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7569 - d_real_loss: 0.6756 - d_fake_loss: 0.6733 - d_acc: 0.5707 - kl_divergence: 4.8971
Epoch 186/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7561 - d_real_loss: 0.6758 - d_fake_loss: 0.6738 - d_acc: 0.5691 - kl_divergence: 4.8971
Epoch 187/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7571 - d_real_loss: 0.6751 - d_fake_loss: 0.6736 - d_acc: 0.5736 - kl_divergence: 4.8972
Epoch 188/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7575 - d_real_loss: 0.6755 - d_fake_loss: 0.6738 - d_acc: 0.5737 - kl_divergence: 4.8971
Epoch 189/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7585 - d_real_loss: 0.6766 - d_fake_loss: 0.6740 - d_acc: 0.5692 - kl_divergence: 4.8971
Epoch 190/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7588 - d_real_loss: 0.6757 - d_fake_loss: 0.6741 - d_acc: 0.5702 - kl_divergence: 4.8971
Saving Model Weights at Epoch 190.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7588 - d_real_loss: 0.6757 - d_fake_loss: 0.6741 - d_acc: 0.5702 - kl_divergence: 4.8971
Epoch 191/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7563 - d_real_loss: 0.6750 - d_fake_loss: 0.6734 - d_acc: 0.5732 - kl_divergence: 4.8969
Epoch 192/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7535 - d_real_loss: 0.6750 - d_fake_loss: 0.6731 - d_acc: 0.5756 - kl_divergence: 4.8969
Epoch 193/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7611 - d_real_loss: 0.6755 - d_fake_loss: 0.6735 - d_acc: 0.5747 - kl_divergence: 4.8969
Epoch 194/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7547 - d_real_loss: 0.6755 - d_fake_loss: 0.6736 - d_acc: 0.5715 - kl_divergence: 4.8968
Epoch 195/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7565 - d_real_loss: 0.6756 - d_fake_loss: 0.6738 - d_acc: 0.5718 - kl_divergence: 4.8967
Epoch 196/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7569 - d_real_loss: 0.6753 - d_fake_loss: 0.6738 - d_acc: 0.5740 - kl_divergence: 4.8966
Epoch 197/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7550 - d_real_loss: 0.6754 - d_fake_loss: 0.6732 - d_acc: 0.5692 - kl_divergence: 4.8964
Epoch 198/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7549 - d_real_loss: 0.6753 - d_fake_loss: 0.6735 - d_acc: 0.5729 - kl_divergence: 4.8963
Epoch 199/200
391/391 [==============================] - 14s 35ms/step - g_loss: 0.7559 - d_real_loss: 0.6761 - d_fake_loss: 0.6745 - d_acc: 0.5698 - kl_divergence: 4.8964
Epoch 200/200
1/1 [==============================] - 1s 927ms/step g_loss: 0.7546 - d_real_loss: 0.6758 - d_fake_loss: 0.6740 - d_acc: 0.5709 - kl_divergence: 4.89
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 101ms/step
4/4 [==============================] - 0s 33ms/step
Epoch 200: Average (IS): 2.710716485977173 | Std (IS): 0.27350670099258423 | FID Score: 228.8518065378787

Saving Model Weights at Epoch 200.

391/391 [==============================] - 40s 102ms/step - g_loss: 0.7546 - d_real_loss: 0.6758 - d_fake_loss: 0.6740 - d_acc: 0.5709 - kl_divergence: 4.8963

DISPLAYING BEST FID AND INCEPTION SCORES FOR SNGAN

  • For SNGAN, we see that although we tried Spectral Normalization, the FID score for SNGAN is not as low as compared to DCGAN and cDCGAN. This shows that the model did not perform as well.
  • In addition, the KL Divergence for SNGAN is slightly worse than that of cDCGAN, indicating that the distribution for generated data is further away from the real data distribution.
In [33]:
monitor = callbacks[0]

# Extract the best KL Divergence
best_kl_div = min(history.history['kl_divergence'])

# Extract the best FID Score
best_fid = min(monitor.sngan_fid_scores) if monitor.sngan_fid_scores else None

# Extract the best IS Score (average)
best_is_avg = max(is_avg for is_avg, _ in monitor.sngan_is_scores) if monitor.sngan_is_scores else None

# Create a DataFrame to store these best values
sngan_df = pd.DataFrame({
    'Best KL Divergence': [best_kl_div],
    'Best FID': [best_fid],
    'Best IS': [best_is_avg]
})

# Display the DataFrame
sngan_df
Out[33]:
Best KL Divergence Best FID Best IS
0 4.896236 225.471866 2.910617

PLOTTING THE MODEL'S PERFORMANCE OVER TIME

  • From the KL Divergence, we see that the KL Divergence has a sharp initial decrease, suggesting rapid early improvements in the generator. After the initial drop, the KL Divergence flattens out, which could mean that the generator's distribution has become more stable, potentially resembling the real data distribution.
  • For the discriminator accuracy, we see that it starts very high, indicating an initial ability to easily distinguish real from fake. However, it quickly drops, which is a desirable outcome, as it suggests the generator is improving. As the training progresses, the accuracy flattens out at a level that suggests the discriminator is challenged but not entirely outperformed by the generator, which is a positive sign of a balanced adversarial training process.
  • For the losses, shows a steep initial drop, indicating rapid learning, and then settles into a relatively stable state with minor fluctuations. This suggests the generator is consistently producing data that challenges the discriminator. In contrast, for the discriminator losses, they are quite low and remain stable throughout training. This stability is good; it suggests the discriminator is performing its task consistently without overpowering the generator.
In [34]:
plot_model_performance(history)
No description has been provided for this image

LOADING AND TESTING THE GENERATOR WEIGHTS ON SYNTHETIC IMAGES

In [35]:
# Loading and testing the generator's weights
generator.load_weights('modelweights/sngan/epoch_200/generator_weights_epoch_200.h5')
generator.summary()
Model: "SNGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________
In [36]:
# Generate random latent vectors and class labels
latent_vectors = tf.random.normal(shape=(100, LATENT_DIM))
class_labels = tf.reshape(tf.range(10), shape=(10, 1))
class_labels = tf.tile(class_labels, multiples=(1, 10))
class_labels = tf.reshape(class_labels, shape=(100, 1))

# Generate images using the loaded generator
generated_images = generator([latent_vectors, class_labels], training=False)
generated_images = (generated_images + 1) / 2

# Create a dictionary to map class labels to their corresponding names
label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

# Create a grid of subplots and display generated images with labels
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_images[i], cmap='gray')
    ax.set_title(label_map[class_labels[i].numpy().item()], fontsize=16)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

MODEL 4 : ACGAN - AUXILLIARY CLASSIFIER GAN¶

ACGAN, Auxilliary Classifier Generative Adversarial Network is basically an extension of the traditional GAN framework that incorporates an auxilliary classifier into the generator and discriminator. ACGANs are used for conditional image generation, where the generated images are conditioned on specific class labels. Hence, we are testing this model as a means to improve the control and diversity of generated samples.

ACGAN

One key feature of ACGAN is its ability to generate images conditioned on specific class labels. When a class label is provided as a condition during generation, the type of image the generator produces can be controlled, allowing the model to generate diverse samples of different classes, making it useful for tasks like image synthesis.

BUILDING THE ACGAN GENERATOR FUNCTION

In [76]:
def create_generator(latent_dim):
    # Foundation for label embedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # Linear activation
    label_embedding = Dense(4 * 4, name='Label_Dense')(label_embedding)

    # Reshape to additional channel
    label_embedding = Reshape((4, 4, 1), name='Label_Reshape')(label_embedding)

    # Foundation for 4x4 image input
    noise_input = Input(shape=(latent_dim,), name='Noise_Input')
    noise_dense = Dense(4 * 4 * 128, name='Noise_Dense')(noise_input)
    noise_dense = ReLU(name='Noise_ReLU')(noise_dense)
    noise_reshape = Reshape((4, 4, 128), name='Noise_Reshape')(noise_dense)

    # Concatenate label embedding and image to produce 129-channel output
    concat = Concatenate(name='Concatenate')([noise_reshape, label_embedding])

    # Upsample to 8x8
    conv1 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv1')(concat)
    conv1 = ReLU(name='Conv1_ReLU')(conv1)

    # Upsample to 16x16
    conv2 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv2')(conv1)
    conv2 = ReLU(name='Conv2_ReLU')(conv2)

    # Upsample to 32x32
    conv3 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv3')(conv2)
    conv3 = ReLU(name='Conv3_ReLU')(conv3)

    # Output 32x32x3
    output = Conv2D(3, (3, 3), activation='tanh', padding='same', name='Output')(conv3)

    model = Model(inputs=[noise_input, label_input], outputs=output, name='ACGAN_Generator')

    return model
In [77]:
create_generator(latent_dim=128).summary()
Model: "ACGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________

BUILDING THE ACGAN DISCRIMINATOR FUNCTION

In [78]:
def create_discriminator():
    input_layer = Input(shape=(32, 32, 3), name='Image_Input')

    # Downsample to 16x16
    conv1 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv1'))
    conv1 = conv1(input_layer)
    conv1 = LeakyReLU(alpha=0.2, name='Conv1_Leaky_ReLU')(conv1)

    # Downsample to 8x8
    conv2 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv2'))
    conv2 = conv2(conv1)
    conv2 = LeakyReLU(alpha=0.2, name='Conv2_Leaky_ReLU')(conv2)

    # Downsample to 4x4
    conv3 = SpectralNormalization(Conv2D(128, kernel_size=3, strides=2, padding='same', name='Conv3'))
    conv3 = conv3(conv2)
    conv3 = LeakyReLU(alpha=0.2, name='Conv3_Leaky_ReLU')(conv3)

    # Flatten feature maps
    flat = Flatten(name='Flatten')(conv3)

    # Output layers
    sigmoid_out = Dense(units=1, activation='sigmoid', name='Sigmoid_Output')(flat)
    softmax_out = Dense(units=10, activation='softmax', name='Softmax_Output')(flat)

    model = Model(input_layer, [sigmoid_out, softmax_out], name='ACGAN_Discriminator')

    return model
In [79]:
create_discriminator().summary()
Model: "ACGAN_Discriminator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Image_Input (InputLayer)       [(None, 32, 32, 3)]  0           []                               
                                                                                                  
 spectral_normalization_39 (Spe  (None, 16, 16, 128)  3712       ['Image_Input[0][0]']            
 ctralNormalization)                                                                              
                                                                                                  
 Conv1_Leaky_ReLU (LeakyReLU)   (None, 16, 16, 128)  0           ['spectral_normalization_39[0][0]
                                                                 ']                               
                                                                                                  
 spectral_normalization_40 (Spe  (None, 8, 8, 128)   147712      ['Conv1_Leaky_ReLU[0][0]']       
 ctralNormalization)                                                                              
                                                                                                  
 Conv2_Leaky_ReLU (LeakyReLU)   (None, 8, 8, 128)    0           ['spectral_normalization_40[0][0]
                                                                 ']                               
                                                                                                  
 spectral_normalization_41 (Spe  (None, 4, 4, 128)   147712      ['Conv2_Leaky_ReLU[0][0]']       
 ctralNormalization)                                                                              
                                                                                                  
 Conv3_Leaky_ReLU (LeakyReLU)   (None, 4, 4, 128)    0           ['spectral_normalization_41[0][0]
                                                                 ']                               
                                                                                                  
 Flatten (Flatten)              (None, 2048)         0           ['Conv3_Leaky_ReLU[0][0]']       
                                                                                                  
 Sigmoid_Output (Dense)         (None, 1)            2049        ['Flatten[0][0]']                
                                                                                                  
 Softmax_Output (Dense)         (None, 10)           20490       ['Flatten[0][0]']                
                                                                                                  
==================================================================================================
Total params: 321,675
Trainable params: 321,291
Non-trainable params: 384
__________________________________________________________________________________________________

BUILDING THE TRAINING FUNCTIONS AND CLASSES FOR ACGAN

In [80]:
class ACGAN(Model):
    def __init__(self, generator, discriminator, latent_dim):
        super(ACGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_1, loss_2):
        super(ACGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_1 = loss_1 # disc and gen loss
        self.loss_2 = loss_2 # auxiliary loss
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_loss_metric = keras.metrics.Mean(name='d_loss')  # disc bce loss
        self.aux_loss_metric = keras.metrics.Mean(name='aux_loss')  # disc cce loss
        self.d_acc_metric = keras.metrics.BinaryAccuracy(name='d_acc')
        self.kl_metric = keras.metrics.KLDivergence()

    @property
    def metrics(self):
        return [self.g_loss_metric, self.d_loss_metric, self.aux_loss_metric, self.d_acc_metric]

    def train_step(self, data):
        real_images, class_labels = data
        class_labels = tf.cast(class_labels, tf.float32)
        batch_size = tf.shape(real_images)[0]

        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        generated_images = self.generator([random_latent_vectors, class_labels], training=True)

        # freeze generator
        self.discriminator.trainable = True
        self.generator.trainable = False

        with tf.GradientTape() as disc_tape:
            real_pred, real_aux = self.discriminator(real_images, training=True)
            fake_pred, fake_aux = self.discriminator(generated_images, training=True)

            # discriminator loss
            d_loss_1 = self.loss_1(tf.ones_like(real_pred), real_pred)
            d_loss_2 = self.loss_1(tf.zeros_like(fake_pred), fake_pred)
            d_loss = d_loss_1 + d_loss_2
            # auxiliary loss
            aux_loss = self.loss_2(class_labels, fake_aux)
            # total discriminator loss
            d_loss += aux_loss

        # discriminator gradients
        d_grads = disc_tape.gradient(d_loss, self.discriminator.trainable_weights)
        # update discriminator
        self.d_optimizer.apply_gradients(zip(d_grads, self.discriminator.trainable_weights))

        # generator loss
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        # freeze discriminator
        self.discriminator.trainable = False
        self.generator.trainable = True

        with tf.GradientTape() as gen_tape:
            generated_images = self.generator([random_latent_vectors, class_labels], training=True)
            fake_pred, fake_aux = self.discriminator(generated_images, training=True)
            g_loss_1 = self.loss_1(tf.ones_like(fake_pred), fake_pred)
            g_loss_2 = self.loss_2(class_labels, fake_aux)
            g_loss = g_loss_1 + g_loss_2

        # generator gradients
        g_grads = gen_tape.gradient(g_loss, self.generator.trainable_weights)
        # update generator
        self.g_optimizer.apply_gradients(zip(g_grads, self.generator.trainable_weights))

        # update metrics
        self.g_loss_metric.update_state(g_loss)
        self.d_loss_metric.update_state(d_loss)
        self.aux_loss_metric.update_state(aux_loss)
        self.d_acc_metric.update_state(tf.ones_like(real_pred), real_pred)
        
        return {
            'g_loss': self.g_loss_metric.result(),
            'd_loss': self.d_loss_metric.result(),
            'aux_loss': self.aux_loss_metric.result(),
            'd_acc': self.d_acc_metric.result(),
            'kl_divergence': self.kl_metric.result()
        }
In [81]:
class GANMonitor(Callback):
    def __init__(self, latent_dim, label_map):
        self.latent_dim = latent_dim
        self.label_map = label_map
        self.acgan_fid_scores = []
        self.acgan_is_scores = []

    def on_epoch_end(self, epoch, logs=None):
        latent_vectors = tf.random.normal(shape=(100, self.latent_dim))
        class_labels = tf.reshape(tf.range(10), shape=(10, 1))
        class_labels = tf.tile(class_labels, multiples=(1, 10))
        class_labels = tf.reshape(class_labels, shape=(100, 1))

        generated_images = self.model.generator([latent_vectors, class_labels], training=False)
        generated_images = (generated_images + 1) / 2
        
        if not os.path.exists('modelweights/acgan'):
            os.makedirs('modelweights/acgan')

        if not os.path.exists('images/acgan_images'):
            os.makedirs('images/acgan_images')
            
        if (epoch + 1) % 50 == 0:
            # Calculate FID and IS
            is_avg, is_std = calculate_inception_score(generated_images)
            fid = calculate_fid(generated_images)
            
            # Append metrics to lists
            self.acgan_fid_scores.append(fid)
            self.acgan_is_scores.append((is_avg, is_std))
            
            print(f'Epoch {epoch + 1}: Average (IS): {is_avg} | Std (IS): {is_std} | FID Score: {fid}')
            
        if (epoch + 1) % 10 == 0:
            if not os.path.exists(f'./modelweights/acgan/epoch_{epoch + 1}'):
                os.makedirs(f'./modelweights/acgan/epoch_{epoch + 1}')
                self.model.generator.save_weights(f'./modelweights/acgan/epoch_{epoch + 1}/generator_weights_epoch_{epoch + 1}.h5')
                self.model.discriminator.save_weights(f'./modelweights/acgan/epoch_{epoch + 1}/discriminator_weights_epoch_{epoch + 1}.h5')
                print(f'\nSaving Model Weights at Epoch {epoch + 1}.\n')

            fig, axes = plt.subplots(10, 10, figsize=(20, 20))
            axes = axes.flatten()

            for i, ax in enumerate(axes):
                ax.imshow(generated_images[i])
                ax.set_title(self.label_map[class_labels[i].numpy().item()], fontsize=16)
                ax.axis('off')

            plt.tight_layout()
            plt.savefig(f'./images/acgan_images/generated_img_{epoch + 1}.png')
            plt.close()
In [82]:
EPOCHS = 200
LATENT_DIM = 128    
LEARNING_RATE = 2e-4
BETA_1 = 0.5
LABEL_SMOOTHING = 0.1

label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

callbacks = [GANMonitor(LATENT_DIM, label_map)]

generator = create_generator(LATENT_DIM)
discriminator = create_discriminator()
acgan = ACGAN(generator, discriminator, latent_dim=LATENT_DIM)
acgan.compile(
    g_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    d_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    loss_1=BinaryCrossentropy(label_smoothing=LABEL_SMOOTHING),
    loss_2=SparseCategoricalCrossentropy()
)
In [83]:
history = acgan.fit(dataset, epochs=EPOCHS, callbacks=callbacks, use_multiprocessing=True)
Epoch 1/200
391/391 [==============================] - 15s 33ms/step - g_loss: 4.0498 - d_loss: 3.1857 - aux_loss: 2.3076 - d_acc: 0.8450 - kl_divergence: 0.0000e+00
Epoch 2/200
391/391 [==============================] - 13s 32ms/step - g_loss: 4.3621 - d_loss: 3.0696 - aux_loss: 2.3052 - d_acc: 0.8489 - kl_divergence: 0.0000e+00
Epoch 3/200
391/391 [==============================] - 13s 32ms/step - g_loss: 4.4807 - d_loss: 2.9905 - aux_loss: 2.3054 - d_acc: 0.8857 - kl_divergence: 0.0000e+00
Epoch 4/200
391/391 [==============================] - 13s 32ms/step - g_loss: 4.0874 - d_loss: 3.1909 - aux_loss: 2.3074 - d_acc: 0.8119 - kl_divergence: 0.0000e+00
Epoch 5/200
391/391 [==============================] - 13s 32ms/step - g_loss: 3.8019 - d_loss: 3.2663 - aux_loss: 2.3014 - d_acc: 0.8035 - kl_divergence: 0.0000e+00
Epoch 6/200
391/391 [==============================] - 13s 32ms/step - g_loss: 2.5440 - d_loss: 2.3394 - aux_loss: 1.2589 - d_acc: 0.7666 - kl_divergence: 0.0000e+00
Epoch 7/200
391/391 [==============================] - 13s 32ms/step - g_loss: 1.0654 - d_loss: 1.2452 - aux_loss: 0.0506 - d_acc: 0.7084 - kl_divergence: 0.0000e+00
Epoch 8/200
391/391 [==============================] - 13s 32ms/step - g_loss: 0.9970 - d_loss: 1.2582 - aux_loss: 0.0242 - d_acc: 0.6835 - kl_divergence: 0.0000e+00
Epoch 9/200
391/391 [==============================] - 13s 32ms/step - g_loss: 1.0040 - d_loss: 1.2597 - aux_loss: 0.0187 - d_acc: 0.6670 - kl_divergence: 0.0000e+00
Epoch 10/200
389/391 [============================>.] - ETA: 0s - g_loss: 1.0205 - d_loss: 1.2718 - aux_loss: 0.0152 - d_acc: 0.6507 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 10.

391/391 [==============================] - 21s 55ms/step - g_loss: 1.0200 - d_loss: 1.2721 - aux_loss: 0.0151 - d_acc: 0.6500 - kl_divergence: 0.0000e+00
Epoch 11/200
391/391 [==============================] - 13s 32ms/step - g_loss: 1.0080 - d_loss: 1.2714 - aux_loss: 0.0114 - d_acc: 0.6585 - kl_divergence: 0.0000e+00
Epoch 12/200
391/391 [==============================] - 13s 32ms/step - g_loss: 1.0197 - d_loss: 1.2497 - aux_loss: 0.0124 - d_acc: 0.6688 - kl_divergence: 0.0000e+00
Epoch 13/200
391/391 [==============================] - 13s 32ms/step - g_loss: 1.0076 - d_loss: 1.2675 - aux_loss: 0.0123 - d_acc: 0.6493 - kl_divergence: 0.0000e+00
Epoch 14/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0460 - d_loss: 1.2216 - aux_loss: 0.0114 - d_acc: 0.6784 - kl_divergence: 0.0000e+00
Epoch 15/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0678 - d_loss: 1.2229 - aux_loss: 0.0128 - d_acc: 0.6782 - kl_divergence: 0.0000e+00
Epoch 16/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0827 - d_loss: 1.2087 - aux_loss: 0.0098 - d_acc: 0.6812 - kl_divergence: 0.0000e+00
Epoch 17/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.1328 - d_loss: 1.1872 - aux_loss: 0.0108 - d_acc: 0.6883 - kl_divergence: 0.0000e+00
Epoch 18/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0883 - d_loss: 1.1914 - aux_loss: 0.0089 - d_acc: 0.6883 - kl_divergence: 0.0000e+00
Epoch 19/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0730 - d_loss: 1.1938 - aux_loss: 0.0079 - d_acc: 0.6922 - kl_divergence: 0.0000e+00
Epoch 20/200
391/391 [==============================] - ETA: 0s - g_loss: 1.0523 - d_loss: 1.2083 - aux_loss: 0.0066 - d_acc: 0.6742 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 20.

391/391 [==============================] - 22s 55ms/step - g_loss: 1.0523 - d_loss: 1.2083 - aux_loss: 0.0066 - d_acc: 0.6742 - kl_divergence: 0.0000e+00
Epoch 21/200
391/391 [==============================] - 13s 33ms/step - g_loss: 1.0274 - d_loss: 1.2254 - aux_loss: 0.0072 - d_acc: 0.6654 - kl_divergence: 0.0000e+00
Epoch 22/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.9581 - d_loss: 1.2334 - aux_loss: 0.0059 - d_acc: 0.6696 - kl_divergence: 0.0000e+00
Epoch 23/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.9466 - d_loss: 1.2467 - aux_loss: 0.0054 - d_acc: 0.6664 - kl_divergence: 0.0000e+00
Epoch 24/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.9192 - d_loss: 1.2627 - aux_loss: 0.0054 - d_acc: 0.6509 - kl_divergence: 0.0000e+00
Epoch 25/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8952 - d_loss: 1.2735 - aux_loss: 0.0052 - d_acc: 0.6452 - kl_divergence: 0.0000e+00
Epoch 26/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8539 - d_loss: 1.2723 - aux_loss: 0.0054 - d_acc: 0.6662 - kl_divergence: 0.0000e+00
Epoch 27/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8765 - d_loss: 1.2863 - aux_loss: 0.0045 - d_acc: 0.6391 - kl_divergence: 0.0000e+00
Epoch 28/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8550 - d_loss: 1.2958 - aux_loss: 0.0044 - d_acc: 0.6380 - kl_divergence: 0.0000e+00
Epoch 29/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8378 - d_loss: 1.2972 - aux_loss: 0.0038 - d_acc: 0.6353 - kl_divergence: 0.0000e+00
Epoch 30/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.8348 - d_loss: 1.3123 - aux_loss: 0.0038 - d_acc: 0.6277 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 30.

391/391 [==============================] - 21s 53ms/step - g_loss: 0.8348 - d_loss: 1.3124 - aux_loss: 0.0038 - d_acc: 0.6276 - kl_divergence: 0.0000e+00
Epoch 31/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8283 - d_loss: 1.3151 - aux_loss: 0.0035 - d_acc: 0.6159 - kl_divergence: 0.0000e+00
Epoch 32/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8302 - d_loss: 1.3227 - aux_loss: 0.0031 - d_acc: 0.6084 - kl_divergence: 0.0000e+00
Epoch 33/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8225 - d_loss: 1.3238 - aux_loss: 0.0032 - d_acc: 0.6075 - kl_divergence: 0.0000e+00
Epoch 34/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8048 - d_loss: 1.3261 - aux_loss: 0.0032 - d_acc: 0.6085 - kl_divergence: 0.0000e+00
Epoch 35/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7984 - d_loss: 1.3333 - aux_loss: 0.0029 - d_acc: 0.6050 - kl_divergence: 0.0000e+00
Epoch 36/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8071 - d_loss: 1.3373 - aux_loss: 0.0026 - d_acc: 0.5920 - kl_divergence: 0.0000e+00
Epoch 37/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8076 - d_loss: 1.3354 - aux_loss: 0.0026 - d_acc: 0.5959 - kl_divergence: 0.0000e+00
Epoch 38/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8084 - d_loss: 1.3309 - aux_loss: 0.0026 - d_acc: 0.5994 - kl_divergence: 0.0000e+00
Epoch 39/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.8040 - d_loss: 1.3353 - aux_loss: 0.0026 - d_acc: 0.5954 - kl_divergence: 0.0000e+00
Epoch 40/200
391/391 [==============================] - ETA: 0s - g_loss: 0.8028 - d_loss: 1.3359 - aux_loss: 0.0026 - d_acc: 0.5926 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 40.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.8028 - d_loss: 1.3359 - aux_loss: 0.0026 - d_acc: 0.5926 - kl_divergence: 0.0000e+00
Epoch 41/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.8077 - d_loss: 1.3427 - aux_loss: 0.0025 - d_acc: 0.5886 - kl_divergence: 0.0000e+00
Epoch 42/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7895 - d_loss: 1.3338 - aux_loss: 0.0022 - d_acc: 0.5952 - kl_divergence: 0.0000e+00
Epoch 43/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7908 - d_loss: 1.3365 - aux_loss: 0.0022 - d_acc: 0.5897 - kl_divergence: 0.0000e+00
Epoch 44/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7953 - d_loss: 1.3394 - aux_loss: 0.0023 - d_acc: 0.5917 - kl_divergence: 0.0000e+00
Epoch 45/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7908 - d_loss: 1.3424 - aux_loss: 0.0022 - d_acc: 0.5840 - kl_divergence: 0.0000e+00
Epoch 46/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7866 - d_loss: 1.3438 - aux_loss: 0.0020 - d_acc: 0.5830 - kl_divergence: 0.0000e+00
Epoch 47/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7826 - d_loss: 1.3442 - aux_loss: 0.0020 - d_acc: 0.5830 - kl_divergence: 0.0000e+00
Epoch 48/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7953 - d_loss: 1.3490 - aux_loss: 0.0021 - d_acc: 0.5756 - kl_divergence: 0.0000e+00
Epoch 49/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7850 - d_loss: 1.3434 - aux_loss: 0.0018 - d_acc: 0.5829 - kl_divergence: 0.0000e+00
Epoch 50/200
1/1 [==============================] - 1s 1s/steps - g_loss: 0.7885 - d_loss: 1.3425 - aux_loss: 0.0020 - d_acc: 0.5831 - kl_divergence: 0.0000e+
1/1 [==============================] - 0s 34ms/step
1/1 [==============================] - 0s 33ms/step
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 48ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 38ms/step
4/4 [==============================] - 1s 142ms/step
4/4 [==============================] - 0s 29ms/step
Epoch 50: Average (IS): 2.2476816177368164 | Std (IS): 0.2485819309949875 | FID Score: 243.65513975463205

Saving Model Weights at Epoch 50.

391/391 [==============================] - 38s 97ms/step - g_loss: 0.7885 - d_loss: 1.3425 - aux_loss: 0.0020 - d_acc: 0.5831 - kl_divergence: 0.0000e+00
Epoch 51/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7883 - d_loss: 1.3464 - aux_loss: 0.0021 - d_acc: 0.5775 - kl_divergence: 0.0000e+00
Epoch 52/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7820 - d_loss: 1.3401 - aux_loss: 0.0018 - d_acc: 0.5858 - kl_divergence: 0.0000e+00
Epoch 53/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7936 - d_loss: 1.3413 - aux_loss: 0.0018 - d_acc: 0.5826 - kl_divergence: 0.0000e+00
Epoch 54/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7912 - d_loss: 1.3381 - aux_loss: 0.0018 - d_acc: 0.5879 - kl_divergence: 0.0000e+00
Epoch 55/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7893 - d_loss: 1.3370 - aux_loss: 0.0018 - d_acc: 0.5856 - kl_divergence: 0.0000e+00
Epoch 56/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7938 - d_loss: 1.3372 - aux_loss: 0.0017 - d_acc: 0.5852 - kl_divergence: 0.0000e+00
Epoch 57/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7926 - d_loss: 1.3362 - aux_loss: 0.0019 - d_acc: 0.5875 - kl_divergence: 0.0000e+00
Epoch 58/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7968 - d_loss: 1.3350 - aux_loss: 0.0016 - d_acc: 0.5874 - kl_divergence: 0.0000e+00
Epoch 59/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7908 - d_loss: 1.3357 - aux_loss: 0.0016 - d_acc: 0.5898 - kl_divergence: 0.0000e+00
Epoch 60/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7898 - d_loss: 1.3372 - aux_loss: 0.0016 - d_acc: 0.5915 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 60.

391/391 [==============================] - 22s 57ms/step - g_loss: 0.7896 - d_loss: 1.3372 - aux_loss: 0.0016 - d_acc: 0.5915 - kl_divergence: 0.0000e+00
Epoch 61/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7802 - d_loss: 1.3353 - aux_loss: 0.0014 - d_acc: 0.5965 - kl_divergence: 0.0000e+00
Epoch 62/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7841 - d_loss: 1.3355 - aux_loss: 0.0014 - d_acc: 0.5934 - kl_divergence: 0.0000e+00
Epoch 63/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7826 - d_loss: 1.3388 - aux_loss: 0.0016 - d_acc: 0.5907 - kl_divergence: 0.0000e+00
Epoch 64/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7831 - d_loss: 1.3376 - aux_loss: 0.0014 - d_acc: 0.5923 - kl_divergence: 0.0000e+00
Epoch 65/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7852 - d_loss: 1.3367 - aux_loss: 0.0014 - d_acc: 0.5889 - kl_divergence: 0.0000e+00
Epoch 66/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7811 - d_loss: 1.3388 - aux_loss: 0.0018 - d_acc: 0.5894 - kl_divergence: 0.0000e+00
Epoch 67/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7773 - d_loss: 1.3372 - aux_loss: 0.0014 - d_acc: 0.5967 - kl_divergence: 0.0000e+00
Epoch 68/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7783 - d_loss: 1.3392 - aux_loss: 0.0015 - d_acc: 0.5901 - kl_divergence: 0.0000e+00
Epoch 69/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7805 - d_loss: 1.3401 - aux_loss: 0.0017 - d_acc: 0.5869 - kl_divergence: 0.0000e+00
Epoch 70/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7803 - d_loss: 1.3377 - aux_loss: 0.0014 - d_acc: 0.5908 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 70.

391/391 [==============================] - 21s 54ms/step - g_loss: 0.7803 - d_loss: 1.3377 - aux_loss: 0.0014 - d_acc: 0.5908 - kl_divergence: 0.0000e+00
Epoch 71/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7804 - d_loss: 1.3385 - aux_loss: 0.0017 - d_acc: 0.5909 - kl_divergence: 0.0000e+00
Epoch 72/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7768 - d_loss: 1.3376 - aux_loss: 0.0014 - d_acc: 0.5917 - kl_divergence: 0.0000e+00
Epoch 73/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7675 - d_loss: 1.3368 - aux_loss: 0.0015 - d_acc: 0.5937 - kl_divergence: 0.0000e+00
Epoch 74/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7629 - d_loss: 1.3408 - aux_loss: 0.0015 - d_acc: 0.5872 - kl_divergence: 0.0000e+00
Epoch 75/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7665 - d_loss: 1.3437 - aux_loss: 0.0018 - d_acc: 0.5798 - kl_divergence: 0.0000e+00
Epoch 76/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7704 - d_loss: 1.3435 - aux_loss: 0.0013 - d_acc: 0.5824 - kl_divergence: 0.0000e+00
Epoch 77/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7681 - d_loss: 1.3432 - aux_loss: 0.0015 - d_acc: 0.5803 - kl_divergence: 0.0000e+00
Epoch 78/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7665 - d_loss: 1.3439 - aux_loss: 0.0014 - d_acc: 0.5832 - kl_divergence: 0.0000e+00
Epoch 79/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7721 - d_loss: 1.3448 - aux_loss: 0.0016 - d_acc: 0.5786 - kl_divergence: 0.0000e+00
Epoch 80/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7676 - d_loss: 1.3468 - aux_loss: 0.0015 - d_acc: 0.5768 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 80.

391/391 [==============================] - 22s 57ms/step - g_loss: 0.7676 - d_loss: 1.3468 - aux_loss: 0.0015 - d_acc: 0.5768 - kl_divergence: 0.0000e+00
Epoch 81/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7679 - d_loss: 1.3445 - aux_loss: 0.0014 - d_acc: 0.5815 - kl_divergence: 0.0000e+00
Epoch 82/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7655 - d_loss: 1.3449 - aux_loss: 0.0015 - d_acc: 0.5833 - kl_divergence: 0.0000e+00
Epoch 83/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7648 - d_loss: 1.3492 - aux_loss: 0.0016 - d_acc: 0.5744 - kl_divergence: 0.0000e+00
Epoch 84/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7670 - d_loss: 1.3447 - aux_loss: 0.0017 - d_acc: 0.5807 - kl_divergence: 0.0000e+00
Epoch 85/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7626 - d_loss: 1.3467 - aux_loss: 0.0014 - d_acc: 0.5791 - kl_divergence: 0.0000e+00
Epoch 86/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7621 - d_loss: 1.3479 - aux_loss: 0.0015 - d_acc: 0.5798 - kl_divergence: 0.0000e+00
Epoch 87/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7626 - d_loss: 1.3521 - aux_loss: 0.0013 - d_acc: 0.5708 - kl_divergence: 0.0000e+00
Epoch 88/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7591 - d_loss: 1.3496 - aux_loss: 0.0014 - d_acc: 0.5747 - kl_divergence: 0.0000e+00
Epoch 89/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7596 - d_loss: 1.3512 - aux_loss: 0.0019 - d_acc: 0.5755 - kl_divergence: 0.0000e+00
Epoch 90/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7524 - d_loss: 1.3540 - aux_loss: 0.0014 - d_acc: 0.5710 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 90.

391/391 [==============================] - 21s 53ms/step - g_loss: 0.7524 - d_loss: 1.3540 - aux_loss: 0.0014 - d_acc: 0.5710 - kl_divergence: 0.0000e+00
Epoch 91/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7507 - d_loss: 1.3541 - aux_loss: 0.0017 - d_acc: 0.5741 - kl_divergence: 0.0000e+00
Epoch 92/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7523 - d_loss: 1.3564 - aux_loss: 0.0013 - d_acc: 0.5658 - kl_divergence: 0.0000e+00
Epoch 93/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7507 - d_loss: 1.3554 - aux_loss: 0.0014 - d_acc: 0.5693 - kl_divergence: 0.0000e+00
Epoch 94/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7491 - d_loss: 1.3586 - aux_loss: 0.0015 - d_acc: 0.5648 - kl_divergence: 0.0000e+00
Epoch 95/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7462 - d_loss: 1.3587 - aux_loss: 0.0015 - d_acc: 0.5682 - kl_divergence: 0.0000e+00
Epoch 96/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7460 - d_loss: 1.3622 - aux_loss: 0.0014 - d_acc: 0.5575 - kl_divergence: 0.0000e+00
Epoch 97/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7404 - d_loss: 1.3626 - aux_loss: 0.0011 - d_acc: 0.5613 - kl_divergence: 0.0000e+00
Epoch 98/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7433 - d_loss: 1.3609 - aux_loss: 0.0014 - d_acc: 0.5619 - kl_divergence: 0.0000e+00
Epoch 99/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7460 - d_loss: 1.3632 - aux_loss: 0.0013 - d_acc: 0.5552 - kl_divergence: 0.0000e+00
Epoch 100/200
1/1 [==============================] - 1s 939ms/step g_loss: 0.7416 - d_loss: 1.3618 - aux_loss: 0.0014 - d_acc: 0.5656 - kl_divergence: 0.0000e+
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 33ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 43ms/step
1/1 [==============================] - 0s 36ms/step
1/1 [==============================] - 0s 31ms/step
4/4 [==============================] - 1s 88ms/step
4/4 [==============================] - 0s 27ms/step
Epoch 100: Average (IS): 2.602602481842041 | Std (IS): 0.3533141613006592 | FID Score: 235.17948536228246

Saving Model Weights at Epoch 100.

391/391 [==============================] - 38s 98ms/step - g_loss: 0.7416 - d_loss: 1.3618 - aux_loss: 0.0014 - d_acc: 0.5656 - kl_divergence: 0.0000e+00
Epoch 101/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7436 - d_loss: 1.3624 - aux_loss: 0.0012 - d_acc: 0.5593 - kl_divergence: 0.0000e+00
Epoch 102/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7423 - d_loss: 1.3656 - aux_loss: 0.0013 - d_acc: 0.5539 - kl_divergence: 0.0000e+00
Epoch 103/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7392 - d_loss: 1.3645 - aux_loss: 0.0015 - d_acc: 0.5584 - kl_divergence: 0.0000e+00
Epoch 104/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7420 - d_loss: 1.3634 - aux_loss: 0.0011 - d_acc: 0.5529 - kl_divergence: 0.0000e+00
Epoch 105/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7420 - d_loss: 1.3639 - aux_loss: 0.0012 - d_acc: 0.5552 - kl_divergence: 0.0000e+00
Epoch 106/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7441 - d_loss: 1.3614 - aux_loss: 0.0010 - d_acc: 0.5624 - kl_divergence: 0.0000e+00
Epoch 107/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7446 - d_loss: 1.3607 - aux_loss: 0.0010 - d_acc: 0.5609 - kl_divergence: 0.0000e+00
Epoch 108/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7462 - d_loss: 1.3581 - aux_loss: 0.0011 - d_acc: 0.5668 - kl_divergence: 0.0000e+00
Epoch 109/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7466 - d_loss: 1.3605 - aux_loss: 0.0012 - d_acc: 0.5601 - kl_divergence: 0.0000e+00
Epoch 110/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7461 - d_loss: 1.3597 - aux_loss: 0.0011 - d_acc: 0.5641 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 110.

391/391 [==============================] - 21s 55ms/step - g_loss: 0.7461 - d_loss: 1.3597 - aux_loss: 0.0011 - d_acc: 0.5641 - kl_divergence: 0.0000e+00
Epoch 111/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7471 - d_loss: 1.3556 - aux_loss: 0.0011 - d_acc: 0.5670 - kl_divergence: 0.0000e+00
Epoch 112/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7493 - d_loss: 1.3571 - aux_loss: 0.0013 - d_acc: 0.5661 - kl_divergence: 0.0000e+00
Epoch 113/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7506 - d_loss: 1.3573 - aux_loss: 0.0012 - d_acc: 0.5627 - kl_divergence: 0.0000e+00
Epoch 114/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7469 - d_loss: 1.3565 - aux_loss: 0.0011 - d_acc: 0.5662 - kl_divergence: 0.0000e+00
Epoch 115/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7459 - d_loss: 1.3573 - aux_loss: 0.0013 - d_acc: 0.5652 - kl_divergence: 0.0000e+00
Epoch 116/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7450 - d_loss: 1.3568 - aux_loss: 0.0011 - d_acc: 0.5658 - kl_divergence: 0.0000e+00
Epoch 117/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7464 - d_loss: 1.3581 - aux_loss: 0.0014 - d_acc: 0.5668 - kl_divergence: 0.0000e+00
Epoch 118/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7453 - d_loss: 1.3593 - aux_loss: 9.7745e-04 - d_acc: 0.5626 - kl_divergence: 0.0000e+00
Epoch 119/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7435 - d_loss: 1.3600 - aux_loss: 8.9505e-04 - d_acc: 0.5629 - kl_divergence: 0.0000e+00
Epoch 120/200
389/391 [============================>.] - ETA: 0s - g_loss: 0.7454 - d_loss: 1.3570 - aux_loss: 0.0011 - d_acc: 0.5620 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 120.

391/391 [==============================] - 21s 54ms/step - g_loss: 0.7454 - d_loss: 1.3571 - aux_loss: 0.0011 - d_acc: 0.5620 - kl_divergence: 0.0000e+00
Epoch 121/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7451 - d_loss: 1.3576 - aux_loss: 0.0012 - d_acc: 0.5628 - kl_divergence: 0.0000e+00
Epoch 122/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7458 - d_loss: 1.3575 - aux_loss: 0.0011 - d_acc: 0.5659 - kl_divergence: 0.0000e+00
Epoch 123/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7431 - d_loss: 1.3589 - aux_loss: 0.0010 - d_acc: 0.5610 - kl_divergence: 0.0000e+00
Epoch 124/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7391 - d_loss: 1.3605 - aux_loss: 0.0010 - d_acc: 0.5579 - kl_divergence: 0.0000e+00
Epoch 125/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7393 - d_loss: 1.3636 - aux_loss: 0.0011 - d_acc: 0.5591 - kl_divergence: 0.0000e+00
Epoch 126/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7372 - d_loss: 1.3637 - aux_loss: 9.5556e-04 - d_acc: 0.5582 - kl_divergence: 0.0000e+00
Epoch 127/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7385 - d_loss: 1.3631 - aux_loss: 0.0012 - d_acc: 0.5585 - kl_divergence: 0.0000e+00
Epoch 128/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7379 - d_loss: 1.3651 - aux_loss: 0.0011 - d_acc: 0.5544 - kl_divergence: 0.0000e+00
Epoch 129/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7358 - d_loss: 1.3643 - aux_loss: 0.0012 - d_acc: 0.5611 - kl_divergence: 0.0000e+00
Epoch 130/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7350 - d_loss: 1.3656 - aux_loss: 0.0010 - d_acc: 0.5543 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 130.

391/391 [==============================] - 22s 55ms/step - g_loss: 0.7350 - d_loss: 1.3656 - aux_loss: 0.0010 - d_acc: 0.5543 - kl_divergence: 0.0000e+00
Epoch 131/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7331 - d_loss: 1.3654 - aux_loss: 0.0011 - d_acc: 0.5570 - kl_divergence: 0.0000e+00
Epoch 132/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7340 - d_loss: 1.3688 - aux_loss: 9.9565e-04 - d_acc: 0.5462 - kl_divergence: 0.0000e+00
Epoch 133/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7345 - d_loss: 1.3673 - aux_loss: 0.0012 - d_acc: 0.5537 - kl_divergence: 0.0000e+00
Epoch 134/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7355 - d_loss: 1.3676 - aux_loss: 0.0011 - d_acc: 0.5496 - kl_divergence: 0.0000e+00
Epoch 135/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7323 - d_loss: 1.3666 - aux_loss: 9.5803e-04 - d_acc: 0.5533 - kl_divergence: 0.0000e+00
Epoch 136/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7330 - d_loss: 1.3679 - aux_loss: 9.3693e-04 - d_acc: 0.5543 - kl_divergence: 0.0000e+00
Epoch 137/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7334 - d_loss: 1.3686 - aux_loss: 9.6621e-04 - d_acc: 0.5489 - kl_divergence: 0.0000e+00
Epoch 138/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7331 - d_loss: 1.3670 - aux_loss: 8.7865e-04 - d_acc: 0.5543 - kl_divergence: 0.0000e+00
Epoch 139/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7341 - d_loss: 1.3677 - aux_loss: 0.0010 - d_acc: 0.5502 - kl_divergence: 0.0000e+00
Epoch 140/200
389/391 [============================>.] - ETA: 0s - g_loss: 0.7321 - d_loss: 1.3659 - aux_loss: 8.6094e-04 - d_acc: 0.5528 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 140.

391/391 [==============================] - 23s 59ms/step - g_loss: 0.7320 - d_loss: 1.3659 - aux_loss: 8.6801e-04 - d_acc: 0.5525 - kl_divergence: 0.0000e+00
Epoch 141/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7349 - d_loss: 1.3664 - aux_loss: 0.0010 - d_acc: 0.5507 - kl_divergence: 0.0000e+00
Epoch 142/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7328 - d_loss: 1.3669 - aux_loss: 8.4332e-04 - d_acc: 0.5529 - kl_divergence: 0.0000e+00
Epoch 143/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7343 - d_loss: 1.3675 - aux_loss: 9.7061e-04 - d_acc: 0.5488 - kl_divergence: 0.0000e+00
Epoch 144/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7339 - d_loss: 1.3669 - aux_loss: 9.5718e-04 - d_acc: 0.5518 - kl_divergence: 0.0000e+00
Epoch 145/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7323 - d_loss: 1.3685 - aux_loss: 9.7655e-04 - d_acc: 0.5500 - kl_divergence: 0.0000e+00
Epoch 146/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7312 - d_loss: 1.3690 - aux_loss: 9.4814e-04 - d_acc: 0.5507 - kl_divergence: 0.0000e+00
Epoch 147/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7332 - d_loss: 1.3679 - aux_loss: 8.8830e-04 - d_acc: 0.5472 - kl_divergence: 0.0000e+00
Epoch 148/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7345 - d_loss: 1.3672 - aux_loss: 0.0011 - d_acc: 0.5523 - kl_divergence: 0.0000e+00
Epoch 149/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7347 - d_loss: 1.3665 - aux_loss: 8.7903e-04 - d_acc: 0.5508 - kl_divergence: 0.0000e+00
Epoch 150/200
1/1 [==============================] - 1s 1s/steps - g_loss: 0.7336 - d_loss: 1.3656 - aux_loss: 8.4939e-04 - d_acc: 0.5550 - kl_divergence: 0.0000e+
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 35ms/step
1/1 [==============================] - 0s 44ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 39ms/step
1/1 [==============================] - 0s 38ms/step
4/4 [==============================] - 1s 139ms/step
4/4 [==============================] - 0s 31ms/step
Epoch 150: Average (IS): 2.86826491355896 | Std (IS): 0.32923057675361633 | FID Score: 226.21423759424874

Saving Model Weights at Epoch 150.

391/391 [==============================] - 38s 97ms/step - g_loss: 0.7336 - d_loss: 1.3656 - aux_loss: 8.4939e-04 - d_acc: 0.5550 - kl_divergence: 0.0000e+00
Epoch 151/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7335 - d_loss: 1.3656 - aux_loss: 9.8048e-04 - d_acc: 0.5541 - kl_divergence: 0.0000e+00
Epoch 152/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7338 - d_loss: 1.3674 - aux_loss: 0.0011 - d_acc: 0.5496 - kl_divergence: 0.0000e+00
Epoch 153/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7338 - d_loss: 1.3645 - aux_loss: 8.9345e-04 - d_acc: 0.5557 - kl_divergence: 0.0000e+00
Epoch 154/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7318 - d_loss: 1.3674 - aux_loss: 0.0012 - d_acc: 0.5467 - kl_divergence: 0.0000e+00
Epoch 155/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7329 - d_loss: 1.3658 - aux_loss: 8.9406e-04 - d_acc: 0.5507 - kl_divergence: 0.0000e+00
Epoch 156/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7330 - d_loss: 1.3650 - aux_loss: 8.1746e-04 - d_acc: 0.5570 - kl_divergence: 0.0000e+00
Epoch 157/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7328 - d_loss: 1.3654 - aux_loss: 8.7912e-04 - d_acc: 0.5542 - kl_divergence: 0.0000e+00
Epoch 158/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7355 - d_loss: 1.3661 - aux_loss: 8.6914e-04 - d_acc: 0.5481 - kl_divergence: 0.0000e+00
Epoch 159/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7311 - d_loss: 1.3651 - aux_loss: 0.0010 - d_acc: 0.5546 - kl_divergence: 0.0000e+00
Epoch 160/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7338 - d_loss: 1.3650 - aux_loss: 8.0050e-04 - d_acc: 0.5523 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 160.

391/391 [==============================] - 21s 54ms/step - g_loss: 0.7338 - d_loss: 1.3650 - aux_loss: 8.0050e-04 - d_acc: 0.5523 - kl_divergence: 0.0000e+00
Epoch 161/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7338 - d_loss: 1.3652 - aux_loss: 8.6686e-04 - d_acc: 0.5527 - kl_divergence: 0.0000e+00
Epoch 162/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7343 - d_loss: 1.3654 - aux_loss: 9.0064e-04 - d_acc: 0.5561 - kl_divergence: 0.0000e+00
Epoch 163/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7322 - d_loss: 1.3651 - aux_loss: 9.0452e-04 - d_acc: 0.5563 - kl_divergence: 0.0000e+00
Epoch 164/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7328 - d_loss: 1.3657 - aux_loss: 0.0011 - d_acc: 0.5492 - kl_divergence: 0.0000e+00
Epoch 165/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7328 - d_loss: 1.3660 - aux_loss: 6.8744e-04 - d_acc: 0.5528 - kl_divergence: 0.0000e+00
Epoch 166/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7324 - d_loss: 1.3657 - aux_loss: 8.6071e-04 - d_acc: 0.5521 - kl_divergence: 0.0000e+00
Epoch 167/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7328 - d_loss: 1.3654 - aux_loss: 7.8754e-04 - d_acc: 0.5553 - kl_divergence: 0.0000e+00
Epoch 168/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7323 - d_loss: 1.3656 - aux_loss: 7.9706e-04 - d_acc: 0.5568 - kl_divergence: 0.0000e+00
Epoch 169/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7315 - d_loss: 1.3649 - aux_loss: 7.3880e-04 - d_acc: 0.5551 - kl_divergence: 0.0000e+00
Epoch 170/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7326 - d_loss: 1.3657 - aux_loss: 8.7404e-04 - d_acc: 0.5524 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 170.

391/391 [==============================] - 21s 53ms/step - g_loss: 0.7325 - d_loss: 1.3657 - aux_loss: 8.7249e-04 - d_acc: 0.5526 - kl_divergence: 0.0000e+00
Epoch 171/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7314 - d_loss: 1.3672 - aux_loss: 8.9102e-04 - d_acc: 0.5525 - kl_divergence: 0.0000e+00
Epoch 172/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7322 - d_loss: 1.3670 - aux_loss: 7.5378e-04 - d_acc: 0.5545 - kl_divergence: 0.0000e+00
Epoch 173/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7308 - d_loss: 1.3639 - aux_loss: 6.6626e-04 - d_acc: 0.5578 - kl_divergence: 0.0000e+00
Epoch 174/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7327 - d_loss: 1.3656 - aux_loss: 7.5086e-04 - d_acc: 0.5516 - kl_divergence: 0.0000e+00
Epoch 175/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7310 - d_loss: 1.3663 - aux_loss: 7.8209e-04 - d_acc: 0.5498 - kl_divergence: 0.0000e+00
Epoch 176/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7292 - d_loss: 1.3672 - aux_loss: 7.3615e-04 - d_acc: 0.5516 - kl_divergence: 0.0000e+00
Epoch 177/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7299 - d_loss: 1.3672 - aux_loss: 7.2984e-04 - d_acc: 0.5520 - kl_divergence: 0.0000e+00
Epoch 178/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7269 - d_loss: 1.3671 - aux_loss: 8.2459e-04 - d_acc: 0.5522 - kl_divergence: 0.0000e+00
Epoch 179/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7277 - d_loss: 1.3671 - aux_loss: 8.0205e-04 - d_acc: 0.5539 - kl_divergence: 0.0000e+00
Epoch 180/200
390/391 [============================>.] - ETA: 0s - g_loss: 0.7268 - d_loss: 1.3688 - aux_loss: 8.1190e-04 - d_acc: 0.5475 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 180.

391/391 [==============================] - 23s 58ms/step - g_loss: 0.7268 - d_loss: 1.3689 - aux_loss: 8.1223e-04 - d_acc: 0.5473 - kl_divergence: 0.0000e+00
Epoch 181/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7255 - d_loss: 1.3692 - aux_loss: 6.5143e-04 - d_acc: 0.5497 - kl_divergence: 0.0000e+00
Epoch 182/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7258 - d_loss: 1.3697 - aux_loss: 7.9959e-04 - d_acc: 0.5488 - kl_divergence: 0.0000e+00
Epoch 183/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7248 - d_loss: 1.3698 - aux_loss: 7.6391e-04 - d_acc: 0.5488 - kl_divergence: 0.0000e+00
Epoch 184/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7251 - d_loss: 1.3696 - aux_loss: 7.5249e-04 - d_acc: 0.5497 - kl_divergence: 0.0000e+00
Epoch 185/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7227 - d_loss: 1.3711 - aux_loss: 6.9106e-04 - d_acc: 0.5427 - kl_divergence: 0.0000e+00
Epoch 186/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7239 - d_loss: 1.3708 - aux_loss: 7.9433e-04 - d_acc: 0.5452 - kl_divergence: 0.0000e+00
Epoch 187/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7237 - d_loss: 1.3719 - aux_loss: 6.7830e-04 - d_acc: 0.5431 - kl_divergence: 0.0000e+00
Epoch 188/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7209 - d_loss: 1.3707 - aux_loss: 7.0556e-04 - d_acc: 0.5457 - kl_divergence: 0.0000e+00
Epoch 189/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7217 - d_loss: 1.3719 - aux_loss: 6.9864e-04 - d_acc: 0.5424 - kl_divergence: 0.0000e+00
Epoch 190/200
391/391 [==============================] - ETA: 0s - g_loss: 0.7201 - d_loss: 1.3733 - aux_loss: 5.6128e-04 - d_acc: 0.5364 - kl_divergence: 0.0000e+00
Saving Model Weights at Epoch 190.

391/391 [==============================] - 21s 53ms/step - g_loss: 0.7201 - d_loss: 1.3733 - aux_loss: 5.6128e-04 - d_acc: 0.5364 - kl_divergence: 0.0000e+00
Epoch 191/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7203 - d_loss: 1.3717 - aux_loss: 5.9787e-04 - d_acc: 0.5453 - kl_divergence: 0.0000e+00
Epoch 192/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7196 - d_loss: 1.3724 - aux_loss: 6.2682e-04 - d_acc: 0.5422 - kl_divergence: 0.0000e+00
Epoch 193/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7223 - d_loss: 1.3728 - aux_loss: 7.3977e-04 - d_acc: 0.5393 - kl_divergence: 0.0000e+00
Epoch 194/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7199 - d_loss: 1.3732 - aux_loss: 5.5401e-04 - d_acc: 0.5380 - kl_divergence: 0.0000e+00
Epoch 195/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7195 - d_loss: 1.3728 - aux_loss: 7.8990e-04 - d_acc: 0.5457 - kl_divergence: 0.0000e+00
Epoch 196/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7199 - d_loss: 1.3729 - aux_loss: 5.5724e-04 - d_acc: 0.5391 - kl_divergence: 0.0000e+00
Epoch 197/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7196 - d_loss: 1.3728 - aux_loss: 6.8270e-04 - d_acc: 0.5423 - kl_divergence: 0.0000e+00
Epoch 198/200
391/391 [==============================] - 13s 34ms/step - g_loss: 0.7191 - d_loss: 1.3744 - aux_loss: 7.5630e-04 - d_acc: 0.5372 - kl_divergence: 0.0000e+00
Epoch 199/200
391/391 [==============================] - 13s 33ms/step - g_loss: 0.7192 - d_loss: 1.3738 - aux_loss: 6.5806e-04 - d_acc: 0.5372 - kl_divergence: 0.0000e+00
Epoch 200/200
1/1 [==============================] - 1s 947ms/step g_loss: 0.7182 - d_loss: 1.3742 - aux_loss: 6.0151e-04 - d_acc: 0.5413 - kl_divergence: 0.0000e+
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 53ms/step
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 41ms/step
4/4 [==============================] - 1s 108ms/step
4/4 [==============================] - 0s 29ms/step
Epoch 200: Average (IS): 2.711833953857422 | Std (IS): 0.39886319637298584 | FID Score: 220.3420680738184

Saving Model Weights at Epoch 200.

391/391 [==============================] - 37s 95ms/step - g_loss: 0.7184 - d_loss: 1.3743 - aux_loss: 6.0143e-04 - d_acc: 0.5410 - kl_divergence: 0.0000e+00

DISPLAYING BEST FID AND INCEPTION SCORES FOR ACGAN

  • Based on the scores for ACGAN, we see that the FID score for ACGAN is slightly better than that of SNGAN, suggesting that there is a slightly smaller difference between the distributions of the generated images and the real images.
  • In terms of KL Divergence, we see that it has a score of 0, indicating that the ACGAN may have overfitted, due to an anomaly in the measurement process, or a saturation of the metric where the actual value might be so small that it is effectively zero. This could indicate that the images generated would not be the best when compared to our other models.
In [85]:
monitor = callbacks[0]

# Extract the best KL Divergence
best_kl_div = min(history.history['kl_divergence'])

# Extract the best FID Score
best_fid = min(monitor.acgan_fid_scores) if monitor.acgan_fid_scores else None

# Extract the best IS Score (average)
best_is_avg = max(is_avg for is_avg, _ in monitor.acgan_is_scores) if monitor.acgan_is_scores else None

# Create a DataFrame to store these best values
acgan_df = pd.DataFrame({
    'Best KL Divergence': [best_kl_div],
    'Best FID': [best_fid],
    'Best IS': [best_is_avg]
})

# Display the DataFrame
acgan_df
Out[85]:
Best KL Divergence Best FID Best IS
0 0.0 220.342068 2.868265

PLOTTING THE MODEL'S PERFORMANCE OVER TIME

  • From the KL Divergence, we see that the it is nearly flat with the values hovering around zero. This effectively suggests that there is very little divergence between probability distributions of the real and generated data.
  • For the discriminator accuracy, we see that there is a sharp initial decrease from a high value, suggesting the discriminator's ability to distinguish between real and generated images improved rapidly at the start of training. Following this drop, the accuracy gradually levels off to a value just above 0.55, or 55%. This indicates that the discriminator is performing slightly better than random chance (which would be 50%), and the generator is producing images that are somewhat challenging for the discriminator to classify accurately.
  • For the losses, the generator loss starts very high, indicating the initial outputs from the generator were easily distinguishable from real data. It drops significantly and quickly stabilizes, which is indicative of the generator learning to produce more convincing data. In contrast, for the discriminator losses, it also starts high and drops quickly, which suggests the discriminator rapidly became better at distinguishing real from fake. However, its loss flattens out as well, which is a sign that it is facing a consistent challenge from the improving generator.
  • Lastly, for the auxiliary loss, it is relatively stable throughout training, suggesting that the auxiliary classifier component of the GAN is performing consistently.
In [88]:
plot_model_performance_acgan(history)
No description has been provided for this image

LOADING AND TESTING THE GENERATOR WEIGHTS ON SYNTHETIC IMAGES

In [89]:
# Loading and testing the generator's weights
generator.load_weights('modelweights/acgan/epoch_200/generator_weights_epoch_200.h5')
generator.summary()
Model: "ACGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 10)        100         ['Label_Input[0][0]']            
                                                                                                  
 Noise_ReLU (ReLU)              (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        176         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_ReLU[0][0]']             
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_ReLU (ReLU)              (None, 8, 8, 128)    0           ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_ReLU[0][0]']             
                                                                                                  
 Conv2_ReLU (ReLU)              (None, 16, 16, 128)  0           ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_ReLU[0][0]']             
                                                                                                  
 Conv3_ReLU (ReLU)              (None, 32, 32, 128)  0           ['Conv3[0][0]']                  
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_ReLU[0][0]']             
                                                                                                  
==================================================================================================
Total params: 1,056,791
Trainable params: 1,056,791
Non-trainable params: 0
__________________________________________________________________________________________________
In [90]:
# Generate random latent vectors and class labels
latent_vectors = tf.random.normal(shape=(100, LATENT_DIM))
class_labels = tf.reshape(tf.range(10), shape=(10, 1))
class_labels = tf.tile(class_labels, multiples=(1, 10))
class_labels = tf.reshape(class_labels, shape=(100, 1))

# Generate images using the loaded generator
generated_images = generator([latent_vectors, class_labels], training=False)
generated_images = (generated_images + 1) / 2

# Create a dictionary to map class labels to their corresponding names
label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

# Create a grid of subplots and display generated images with labels
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_images[i], cmap='gray')
    ax.set_title(label_map[class_labels[i].numpy().item()], fontsize=16)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

EVALUATION OF INITIAL GAN MODELS¶

To evaluate our models for us to decide which models to tune, we will be performing 3 different types of analysis on our models thus far. These mainly include :

  • Image Fidelity
  • Latent Space Evolution

EVALUATING IMAGE FIDELITY WITH FID, IS AND KL DIVERGENCE

Here, we will use KL Divergence, Inception Score and Frechet Inception Distance to analyze the metrics. The reason we use these two is because we want to see how our model stacks up to the best models in the world. As other metrics such as iFID and KID are difficult to find on the benchmarks as they are less popular, we avoid them, despite metrics such as KID proving to be a strong and robust metric.

In [100]:
# Add a 'Model' column to each DataFrame
dcgan_df['Model'] = 'DCGAN'
cdcgan_df['Model'] = 'cDCGAN'
sngan_df['Model'] = 'SNGAN'
acgan_df['Model'] = 'ACGAN'

# Concatenate all DataFrames into one, ensuring the model names are included
final_df = pd.concat([dcgan_df, cdcgan_df, sngan_df, acgan_df], ignore_index=True)
final_df = final_df[['Model', 'Best KL Divergence', 'Best FID', 'Best IS']]

# Display the combined DataFrame
final_df
Out[100]:
Model Best KL Divergence Best FID Best IS
0 DCGAN 4.918345 216.878999 3.357959
1 cDCGAN 4.565869 209.350541 2.713431
2 SNGAN 4.896236 225.471866 2.910617
3 ACGAN 0.000000 220.342068 2.868265

ANALYSIS OF IMAGE FIDELITY

  • From our analysis, a strong candidate for model improvement is cDCGAN as it returned the lowest FID scores, and since our goal is to improve the image quality and similarity to the real data provided, we would likely choose cDCGAN for further improvement.
  • In terms of Inception Scores, although slightly lower in comparison to the other models, we will still choose cDCGAN for further experimental improvement as IS only represents image quality, but our focus is on generating images that make sense, hence FID will be a more suitable metric to look at.

EVALUATING LATENT SPACE EVOLUTION

For latent space evolution, we will be assessing how the images change over time and how its quality improves over time. To visualize this, we will plot the animations of the image improvements over time, allowing us to see how exactly the images get better (or worse) as training progresses.

We will first define a function that enables us to display the gifs for each of the models.

In [142]:
def display_gif(gif):
    return HTML('<img src="{}">'.format(gif))

DCGAN

In [144]:
# Check if the directory exists, and create it if it doesn't
directory_path = './animation/dcgan'
if not os.path.exists(directory_path):
    os.makedirs(directory_path)

# Collect image paths
images = [] 
gan_img_paths = glob.glob("./images/dcgan_images/*.png") 

# Load images
for path in gan_img_paths: 
    images.append(imageio.imread(path)) 
    
# Save the GIF in the created directory
imageio.mimsave(os.path.join(directory_path, 'DCGAN.gif'), images, duration=0.2)
filename=os.path.join(directory_path, 'DCGAN.gif')

# Display the GIF
display_gif(filename)
Out[144]:
No description has been provided for this image

cDCGAN

In [20]:
# Check if the directory exists, and create it if it doesn't
directory_path = './animation/cdcgan'
if not os.path.exists(directory_path):
    os.makedirs(directory_path)

# Collect image paths
images = [] 
gan_img_paths = glob.glob("./images/cdcgan_images/*.png") 

# Load images
for path in gan_img_paths: 
    images.append(imageio.imread(path)) 
    
# Save the GIF in the created directory
imageio.mimsave(os.path.join(directory_path, 'cDCGAN.gif'), images, duration=0.2)
filename=os.path.join(directory_path, 'cDCGAN.gif')

# Display the GIF
display_gif(filename)
Out[20]:
No description has been provided for this image

SNGAN

In [21]:
# Check if the directory exists, and create it if it doesn't
directory_path = './animation/sngan'
if not os.path.exists(directory_path):
    os.makedirs(directory_path)

# Collect image paths
images = [] 
gan_img_paths = glob.glob("./images/sngan_images/*.png") 

# Load images
for path in gan_img_paths: 
    images.append(imageio.imread(path)) 
    
# Save the GIF in the created directory
imageio.mimsave(os.path.join(directory_path, 'SNGAN.gif'), images, duration=0.2)
filename=os.path.join(directory_path, 'SNGAN.gif')

# Display the GIF
display_gif(filename)
Out[21]:
No description has been provided for this image

ACGAN

In [22]:
# Check if the directory exists, and create it if it doesn't
directory_path = './animation/acgan'
if not os.path.exists(directory_path):
    os.makedirs(directory_path)

# Collect image paths
images = [] 
gan_img_paths = glob.glob("./images/acgan_images/*.png") 

# Load images
for path in gan_img_paths: 
    images.append(imageio.imread(path)) 
    
# Save the GIF in the created directory
imageio.mimsave(os.path.join(directory_path, 'ACGAN.gif'), images, duration=0.2)
filename=os.path.join(directory_path, 'ACGAN.gif')

# Display the GIF
display_gif(filename)
Out[22]:
No description has been provided for this image

ANALYSIS OF IMAGE TRANSFORMATION OVER TIME

  • After viewing the gifs of the images, we can see that although the quality and resolution of ACGAN images improved the most over time, the cCDGAN images ended up with images that best resembled the real images with slow and gradual changes, hence we will be choosing cDCGAN for further improvement and experimental purposes.

MODEL IMPROVEMENT FOR GAN MODELS¶

  • For model improvement, we will be manually tuning our cDCGAN model, which provided the best FID score across all the models.
  • To improve our model, we will be adding BatchNormalization, PReLU, and Gaussian Weight Initialization. We also increased the number of epochs from 200 to 300 to see if training our model for a longer period of time will help in the generation of better images.

WHY BATCH NORMALIZATION? - Potentially helps to stabilize training as it normalizes the activations of each layer in a mini-batch during training, helping reduce internal covariate shift, making it easier for the model to converge.

WHY USE PReLU? - Adaptively learns the slope of the rectified linear unit (ReLU) during training. Unlike traditional ReLU, which has a fixed slope (0 for negative values), PReLU allows negative values to have a learned slope, making it suitable for handling the vanishing gradient problem.

WHY USE GAUSSIAN WEIGHT INITIALIZATION? - He and Xavier weights initialization are unsuitable for training as it makes training more unstable due to the larger distribution of weights. To reduce the instability of training, we will be using Gaussian Weight Initialization[mean = 0, std = 0.02]

BEST MODEL TO BE USED FOR IMPROVEMENT : cDCGAN MODEL WITH GRADIENT TAPE

BUILDING THE TUNED CDCGAN GENERATOR FUNCTION

In [106]:
def create_generator(latent_dim):
    # Weight initializer
    init = RandomNormal(mean=0.0, stddev=0.02)

    # Label embedding and input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 50, embeddings_initializer=init, name='Label_Embedding')(label_input)
    label_embedding = Dense(4*4, kernel_initializer=init, name='Label_Dense')(label_embedding)
    label_embedding = Reshape((4, 4, 1), name='Label_Reshape')(label_embedding)

    # Noise input
    noise_input = Input(shape=(latent_dim,), name='Noise_Input')
    noise_dense = Dense(4*4*128, kernel_initializer=init, name='Noise_Dense')(noise_input)
    noise_dense = LeakyReLU(alpha=0.2, name='Noise_LeakyReLU')(noise_dense)
    noise_reshape = Reshape((4, 4, 128), name='Noise_Reshape')(noise_dense)

    # Combine noise and label
    concat = Concatenate(name='Concatenate')([noise_reshape, label_embedding])

    # Convolutional layers
    conv1 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init, name='Conv1')(concat)
    conv1 = BatchNormalization(name='Conv1_BatchNorm', momentum=0.8)(conv1)
    conv1 = PReLU(name='Conv1_PReLU')(conv1)

    conv2 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init, name='Conv2')(conv1)
    conv2 = BatchNormalization(name='Conv2_BatchNorm', momentum=0.8)(conv2)
    conv2 = PReLU(name='Conv2_PReLU')(conv2)

    conv3 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init, name='Conv3')(conv2)
    conv3 = BatchNormalization(name='Conv3_BatchNorm', momentum=0.8)(conv3)
    conv3 = PReLU(name='Conv3_PReLU')(conv3)

    # Output layer
    output = Conv2D(3, (3, 3), activation='tanh', padding='same', kernel_initializer=init, name='Output')(conv3)
    model = Model(inputs=[noise_input, label_input], outputs=output, name='Improved_cDCGAN_Generator')
    
    return model
In [107]:
create_generator(latent_dim=128).summary()
Model: "Improved_cDCGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 50)        500         ['Label_Input[0][0]']            
                                                                                                  
 Noise_LeakyReLU (LeakyReLU)    (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        816         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_LeakyReLU[0][0]']        
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_BatchNorm (BatchNormaliz  (None, 8, 8, 128)   512         ['Conv1[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv1_PReLU (PReLU)            (None, 8, 8, 128)    8192        ['Conv1_BatchNorm[0][0]']        
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_PReLU[0][0]']            
                                                                                                  
 Conv2_BatchNorm (BatchNormaliz  (None, 16, 16, 128)  512        ['Conv2[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv2_PReLU (PReLU)            (None, 16, 16, 128)  32768       ['Conv2_BatchNorm[0][0]']        
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_PReLU[0][0]']            
                                                                                                  
 Conv3_BatchNorm (BatchNormaliz  (None, 32, 32, 128)  512        ['Conv3[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv3_PReLU (PReLU)            (None, 32, 32, 128)  131072      ['Conv3_BatchNorm[0][0]']        
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_PReLU[0][0]']            
                                                                                                  
==================================================================================================
Total params: 1,231,399
Trainable params: 1,230,631
Non-trainable params: 768
__________________________________________________________________________________________________

BUILDING THE TUNED CDCGAN DISCRIMINATOR FUNCTION

In [108]:
def create_discriminator():
    # Weight initializer
    init = RandomNormal(mean=0.0, stddev=0.02)

    # Label embedding and input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 50, embeddings_initializer=init, name='Label_Embedding')(label_input)
    label_embedding = Dense(32*32, kernel_initializer=init, name='Label_Dense')(label_embedding)
    label_embedding = Reshape((32, 32, 1), name='Label_Reshape')(label_embedding)

    # Image input
    image_input = Input(shape=(32, 32, 3), name='Image_Input')

    # Combine image and label
    concat = Concatenate(name='Concatenate')([image_input, label_embedding])

    # Convolutional layers
    conv1 = Conv2D(128, kernel_size=3, strides=2, padding='same', kernel_initializer=init, name='Conv1')(concat)
    conv1 = PReLU(name='Conv1_PReLU')(conv1)

    conv2 = Conv2D(128, kernel_size=3, strides=2, padding='same', kernel_initializer=init, name='Conv2')(conv1)
    conv2 = PReLU(name='Conv2_PReLU')(conv2)

    conv3 = Conv2D(128, kernel_size=3, strides=2, padding='same', kernel_initializer=init, name='Conv3')(conv2)
    conv3 = PReLU(name='Conv3_PReLU')(conv3)

    # Output layer
    flat = Flatten(name='Flatten')(conv3)
    output = Dense(1, activation='sigmoid', kernel_initializer=init, name='Output')(flat)

    # Model definition
    model = Model(inputs=[image_input, label_input], outputs=output, name='Improved_cDCGAN_Discriminator')
    return model
In [109]:
create_discriminator().summary()
Model: "Improved_cDCGAN_Discriminator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 50)        500         ['Label_Input[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 1024)      52224       ['Label_Embedding[0][0]']        
                                                                                                  
 Image_Input (InputLayer)       [(None, 32, 32, 3)]  0           []                               
                                                                                                  
 Label_Reshape (Reshape)        (None, 32, 32, 1)    0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 32, 32, 4)    0           ['Image_Input[0][0]',            
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2D)                 (None, 16, 16, 128)  4736        ['Concatenate[0][0]']            
                                                                                                  
 Conv1_PReLU (PReLU)            (None, 16, 16, 128)  32768       ['Conv1[0][0]']                  
                                                                                                  
 Conv2 (Conv2D)                 (None, 8, 8, 128)    147584      ['Conv1_PReLU[0][0]']            
                                                                                                  
 Conv2_PReLU (PReLU)            (None, 8, 8, 128)    8192        ['Conv2[0][0]']                  
                                                                                                  
 Conv3 (Conv2D)                 (None, 4, 4, 128)    147584      ['Conv2_PReLU[0][0]']            
                                                                                                  
 Conv3_PReLU (PReLU)            (None, 4, 4, 128)    2048        ['Conv3[0][0]']                  
                                                                                                  
 Flatten (Flatten)              (None, 2048)         0           ['Conv3_PReLU[0][0]']            
                                                                                                  
 Output (Dense)                 (None, 1)            2049        ['Flatten[0][0]']                
                                                                                                  
==================================================================================================
Total params: 397,685
Trainable params: 397,685
Non-trainable params: 0
__________________________________________________________________________________________________

BUILDING THE TRAINING FUNCTIONS AND CLASSES FOR IMPROVED CDCGAN

In [110]:
class TunedConditionalDCGAN(Model):
    def __init__(self, generator, discriminator, latent_dim):
        super(TunedConditionalDCGAN, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(TunedConditionalDCGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_real_loss_metric = keras.metrics.Mean(name='d_real_loss')
        self.d_fake_loss_metric = keras.metrics.Mean(name='d_fake_loss')
        self.d_acc_metric = keras.metrics.BinaryAccuracy(name='d_acc')
        self.kl_metric = keras.metrics.KLDivergence()

    @property
    def metrics(self):
        return [self.g_loss_metric, self.d_real_loss_metric, self.d_fake_loss_metric, self.d_acc_metric, self.kl_metric]

    def train_step(self, data):
        real_images, class_labels = data
        class_labels = tf.cast(class_labels, 'int32')
        batch_size = tf.shape(real_images)[0]

        # train discriminator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        fake_labels = tf.zeros((batch_size, 1))  # (batch_size, 1)
        # Adjust real_labels for label smoothing applied
        real_labels = tf.ones((batch_size, 1))

        # freeze generator
        self.discriminator.trainable = True
        self.generator.trainable = False
    
        with tf.GradientTape() as disc_tape:
            disc_tape.watch(self.discriminator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, class_labels], training=True)
            real_output = self.discriminator([real_images, class_labels], training=True)
            fake_output = self.discriminator([generated_images, class_labels], training=True)
            d_loss_real = self.loss_fn(real_labels, real_output)
            d_loss_fake = self.loss_fn(fake_labels, fake_output)
            d_loss = d_loss_real + d_loss_fake  # log(D(x)) + log(1 - D(G(z))
        
        disc_grads = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(disc_grads, self.discriminator.trainable_variables))

        # train the generator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        misleading_labels = tf.ones((batch_size, 1))

        # freeze discriminator
        self.discriminator.trainable = False
        self.generator.trainable = True

        with tf.GradientTape() as gen_tape:
            gen_tape.watch(self.generator.trainable_variables)
            generated_images = self.generator([random_latent_vectors, class_labels], training=True)
            pred_on_fake = self.discriminator([generated_images, class_labels], training=True)
            g_loss = self.loss_fn(misleading_labels, pred_on_fake)  # maximize log(D(G(z))) = minimize -log(1 - D(G(z)))
        
        gen_grads = gen_tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_optimizer.apply_gradients(zip(gen_grads, self.generator.trainable_variables))

        # update metrics
        self.g_loss_metric.update_state(g_loss)
        self.d_real_loss_metric.update_state(d_loss_real)
        self.d_fake_loss_metric.update_state(d_loss_fake)
        self.d_acc_metric.update_state(real_labels, real_output)
        self.kl_metric.update_state(y_true=real_images, y_pred=generated_images)

        return {
            'g_loss': self.g_loss_metric.result(),
            'd_real_loss': self.d_real_loss_metric.result(),
            'd_fake_loss': self.d_fake_loss_metric.result(),
            'd_acc': self.d_acc_metric.result(),
            'kl_divergence': self.kl_metric.result()
        }
In [111]:
class GANMonitor(Callback):
    def __init__(self, latent_dim, label_map):
        self.latent_dim = latent_dim
        self.label_map = label_map
        self.tunedcdcgan_fid_scores = []
        self.tunedcdcgan_is_scores = []

    def on_epoch_end(self, epoch, logs=None):
        # Plot 100 generated images and save weights every 10 epochs
        latent_vectors = tf.random.normal(shape=(100, self.latent_dim))
        class_labels = tf.reshape(tf.range(10), shape=(10, 1))
        class_labels = tf.tile(class_labels, multiples=(1, 10))
        class_labels = tf.reshape(class_labels, shape=(100, 1))

        generated_images = self.model.generator([latent_vectors, class_labels], training=False)
        generated_images = (generated_images + 1) / 2

        if not os.path.exists('modelweights/tuned_cdcgan'):
            os.makedirs('modelweights/tuned_cdcgan')

        if not os.path.exists('images/tuned_cdcgan_images'):
            os.makedirs('images/tuned_cdcgan_images')
            
        if (epoch + 1) % 50 == 0:
            # Calculate FID and IS
            is_avg, is_std = calculate_inception_score(generated_images)
            fid = calculate_fid(generated_images)
            
            # Append metrics to lists
            self.tunedcdcgan_fid_scores.append(fid)
            self.tunedcdcgan_is_scores.append((is_avg, is_std))
            
            print(f'Epoch {epoch + 1}: Average (IS): {is_avg} | Std (IS): {is_std} | FID Score: {fid}')

        if (epoch + 1) % 10 == 0:
            if not os.path.exists(f'modelweights/tuned_cdcgan/epoch_{epoch + 1}'):
                os.makedirs(f'modelweights/tuned_cdcgan/epoch_{epoch + 1}')
                self.model.generator.save_weights(f'modelweights/tuned_cdcgan/epoch_{epoch + 1}/generator_weights_epoch_{epoch + 1}.h5')
                self.model.discriminator.save_weights(f'modelweights/tuned_cdcgan/epoch_{epoch + 1}/discriminator_weights_epoch_{epoch + 1}.h5')
                print(f'\nSaving Model Weights At Epoch {epoch + 1}.\n')

            fig, axes = plt.subplots(10, 10, figsize=(20, 20))
            axes = axes.flatten()

            for i, ax in enumerate(axes):
                ax.imshow(generated_images[i])
                ax.set_title(self.label_map[class_labels[i].numpy().item()], fontsize=16)
                ax.axis('off')

            plt.tight_layout()
            plt.savefig(f'images/tuned_cdcgan_images/generated_img_{epoch + 1}.png')
            plt.close()
In [112]:
# Defining Constants for the Model
EPOCHS = 300
LATENT_DIM = 128    
LEARNING_RATE = 2e-4
BETA_1 = 0.5
LABEL_SMOOTHING = 0.1

label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

# Defining callbacks for the Model
callbacks = [GANMonitor(LATENT_DIM, label_map)]

generator = create_generator(LATENT_DIM)
discriminator = create_discriminator()
tunedcdcgan = TunedConditionalDCGAN(generator, discriminator, latent_dim=LATENT_DIM)
tunedcdcgan.compile(
    g_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    d_optimizer=Adam(learning_rate=LEARNING_RATE, beta_1=BETA_1),
    loss_fn=BinaryCrossentropy(label_smoothing=LABEL_SMOOTHING)
)
In [113]:
history = tunedcdcgan.fit(dataset, epochs=EPOCHS, callbacks=callbacks, use_multiprocessing=True)
Epoch 1/300
391/391 [==============================] - 56s 135ms/step - g_loss: 0.9359 - d_real_loss: 0.6332 - d_fake_loss: 0.5776 - d_acc: 0.6282 - kl_divergence: 5.3749
Epoch 2/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.7903 - d_real_loss: 0.6818 - d_fake_loss: 0.6472 - d_acc: 0.5390 - kl_divergence: 4.3765
Epoch 3/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.8670 - d_real_loss: 0.6584 - d_fake_loss: 0.6309 - d_acc: 0.6069 - kl_divergence: 4.3608
Epoch 4/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.7137 - d_real_loss: 0.6806 - d_fake_loss: 0.6839 - d_acc: 0.5694 - kl_divergence: 4.2513
Epoch 5/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.7429 - d_real_loss: 0.6893 - d_fake_loss: 0.6790 - d_acc: 0.5279 - kl_divergence: 4.3759
Epoch 6/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.7430 - d_real_loss: 0.6748 - d_fake_loss: 0.6846 - d_acc: 0.5786 - kl_divergence: 4.4922
Epoch 7/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.7915 - d_real_loss: 0.6739 - d_fake_loss: 0.6633 - d_acc: 0.5776 - kl_divergence: 4.5915
Epoch 8/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.8294 - d_real_loss: 0.6653 - d_fake_loss: 0.6521 - d_acc: 0.5791 - kl_divergence: 4.8581
Epoch 9/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9132 - d_real_loss: 0.6535 - d_fake_loss: 0.6132 - d_acc: 0.6194 - kl_divergence: 4.5210
Epoch 10/300
391/391 [==============================] - ETA: 0s - g_loss: 0.9235 - d_real_loss: 0.6507 - d_fake_loss: 0.6218 - d_acc: 0.6002 - kl_divergence: 4.7682
Saving Model Weights At Epoch 10.

391/391 [==============================] - 62s 159ms/step - g_loss: 0.9235 - d_real_loss: 0.6507 - d_fake_loss: 0.6218 - d_acc: 0.6002 - kl_divergence: 4.7679
Epoch 11/300
391/391 [==============================] - 54s 136ms/step - g_loss: 0.9575 - d_real_loss: 0.6317 - d_fake_loss: 0.6053 - d_acc: 0.6439 - kl_divergence: 4.7104
Epoch 12/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.0055 - d_real_loss: 0.6199 - d_fake_loss: 0.5913 - d_acc: 0.6604 - kl_divergence: 4.7501
Epoch 13/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.0500 - d_real_loss: 0.6079 - d_fake_loss: 0.5668 - d_acc: 0.6734 - kl_divergence: 4.6159
Epoch 14/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0445 - d_real_loss: 0.6207 - d_fake_loss: 0.5814 - d_acc: 0.6557 - kl_divergence: 4.5507
Epoch 15/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.9787 - d_real_loss: 0.6225 - d_fake_loss: 0.5938 - d_acc: 0.6402 - kl_divergence: 4.6190
Epoch 16/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.9786 - d_real_loss: 0.6220 - d_fake_loss: 0.5941 - d_acc: 0.6370 - kl_divergence: 4.7997
Epoch 17/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0107 - d_real_loss: 0.6189 - d_fake_loss: 0.5892 - d_acc: 0.6417 - kl_divergence: 4.7570
Epoch 18/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0108 - d_real_loss: 0.6148 - d_fake_loss: 0.5811 - d_acc: 0.6520 - kl_divergence: 4.6962
Epoch 19/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0049 - d_real_loss: 0.6106 - d_fake_loss: 0.5808 - d_acc: 0.6536 - kl_divergence: 4.7024
Epoch 20/300
391/391 [==============================] - ETA: 0s - g_loss: 1.0419 - d_real_loss: 0.6108 - d_fake_loss: 0.5720 - d_acc: 0.6555 - kl_divergence: 4.8033
Saving Model Weights At Epoch 20.

391/391 [==============================] - 62s 159ms/step - g_loss: 1.0419 - d_real_loss: 0.6108 - d_fake_loss: 0.5720 - d_acc: 0.6555 - kl_divergence: 4.8032
Epoch 21/300
391/391 [==============================] - 54s 136ms/step - g_loss: 0.9973 - d_real_loss: 0.6141 - d_fake_loss: 0.5798 - d_acc: 0.6507 - kl_divergence: 4.6207
Epoch 22/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9577 - d_real_loss: 0.6161 - d_fake_loss: 0.5829 - d_acc: 0.6447 - kl_divergence: 4.7530
Epoch 23/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9501 - d_real_loss: 0.6174 - d_fake_loss: 0.5865 - d_acc: 0.6391 - kl_divergence: 4.7538
Epoch 24/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9505 - d_real_loss: 0.6227 - d_fake_loss: 0.5955 - d_acc: 0.6317 - kl_divergence: 4.7109
Epoch 25/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9505 - d_real_loss: 0.6280 - d_fake_loss: 0.5982 - d_acc: 0.6300 - kl_divergence: 4.7671
Epoch 26/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.9136 - d_real_loss: 0.6242 - d_fake_loss: 0.5971 - d_acc: 0.6241 - kl_divergence: 4.6492
Epoch 27/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.9247 - d_real_loss: 0.6285 - d_fake_loss: 0.6006 - d_acc: 0.6193 - kl_divergence: 4.6648
Epoch 28/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.9160 - d_real_loss: 0.6252 - d_fake_loss: 0.5975 - d_acc: 0.6205 - kl_divergence: 4.7086
Epoch 29/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.9193 - d_real_loss: 0.6260 - d_fake_loss: 0.5988 - d_acc: 0.6179 - kl_divergence: 4.7536
Epoch 30/300
391/391 [==============================] - ETA: 0s - g_loss: 0.9320 - d_real_loss: 0.6209 - d_fake_loss: 0.5921 - d_acc: 0.6280 - kl_divergence: 4.6562
Saving Model Weights At Epoch 30.

391/391 [==============================] - 62s 158ms/step - g_loss: 0.9320 - d_real_loss: 0.6209 - d_fake_loss: 0.5921 - d_acc: 0.6280 - kl_divergence: 4.6562
Epoch 31/300
391/391 [==============================] - 54s 136ms/step - g_loss: 0.9353 - d_real_loss: 0.6213 - d_fake_loss: 0.5920 - d_acc: 0.6241 - kl_divergence: 4.6970
Epoch 32/300
391/391 [==============================] - 54s 137ms/step - g_loss: 0.9513 - d_real_loss: 0.6170 - d_fake_loss: 0.5849 - d_acc: 0.6290 - kl_divergence: 4.7651
Epoch 33/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9644 - d_real_loss: 0.6132 - d_fake_loss: 0.5793 - d_acc: 0.6322 - kl_divergence: 4.7505
Epoch 34/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9670 - d_real_loss: 0.6115 - d_fake_loss: 0.5777 - d_acc: 0.6371 - kl_divergence: 4.7010
Epoch 35/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9699 - d_real_loss: 0.6060 - d_fake_loss: 0.5738 - d_acc: 0.6409 - kl_divergence: 4.7095
Epoch 36/300
391/391 [==============================] - 53s 137ms/step - g_loss: 0.9787 - d_real_loss: 0.6041 - d_fake_loss: 0.5695 - d_acc: 0.6450 - kl_divergence: 4.7473
Epoch 37/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.9843 - d_real_loss: 0.6036 - d_fake_loss: 0.5667 - d_acc: 0.6453 - kl_divergence: 4.7293
Epoch 38/300
391/391 [==============================] - 53s 136ms/step - g_loss: 0.9873 - d_real_loss: 0.5985 - d_fake_loss: 0.5618 - d_acc: 0.6504 - kl_divergence: 4.7075
Epoch 39/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0024 - d_real_loss: 0.5951 - d_fake_loss: 0.5574 - d_acc: 0.6566 - kl_divergence: 4.7323
Epoch 40/300
391/391 [==============================] - ETA: 0s - g_loss: 1.0099 - d_real_loss: 0.5936 - d_fake_loss: 0.5556 - d_acc: 0.6591 - kl_divergence: 4.7177
Saving Model Weights At Epoch 40.

391/391 [==============================] - 61s 157ms/step - g_loss: 1.0099 - d_real_loss: 0.5936 - d_fake_loss: 0.5556 - d_acc: 0.6591 - kl_divergence: 4.7176
Epoch 41/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0138 - d_real_loss: 0.5915 - d_fake_loss: 0.5522 - d_acc: 0.6605 - kl_divergence: 4.7023
Epoch 42/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0241 - d_real_loss: 0.5906 - d_fake_loss: 0.5519 - d_acc: 0.6631 - kl_divergence: 4.7416
Epoch 43/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0280 - d_real_loss: 0.5876 - d_fake_loss: 0.5465 - d_acc: 0.6668 - kl_divergence: 4.7165
Epoch 44/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0263 - d_real_loss: 0.5850 - d_fake_loss: 0.5452 - d_acc: 0.6697 - kl_divergence: 4.7632
Epoch 45/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0369 - d_real_loss: 0.5844 - d_fake_loss: 0.5441 - d_acc: 0.6714 - kl_divergence: 4.7343
Epoch 46/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.0507 - d_real_loss: 0.5821 - d_fake_loss: 0.5406 - d_acc: 0.6731 - kl_divergence: 4.7496
Epoch 47/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0542 - d_real_loss: 0.5775 - d_fake_loss: 0.5367 - d_acc: 0.6800 - kl_divergence: 4.7747
Epoch 48/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.0694 - d_real_loss: 0.5769 - d_fake_loss: 0.5355 - d_acc: 0.6823 - kl_divergence: 4.7409
Epoch 49/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.0811 - d_real_loss: 0.5729 - d_fake_loss: 0.5313 - d_acc: 0.6866 - kl_divergence: 4.7557
Epoch 50/300
1/1 [==============================] - 2s 2s/steps - g_loss: 1.0805 - d_real_loss: 0.5722 - d_fake_loss: 0.5285 - d_acc: 0.6888 - kl_divergence: 4.67
1/1 [==============================] - 0s 27ms/step
1/1 [==============================] - 0s 29ms/step
1/1 [==============================] - 0s 29ms/step
1/1 [==============================] - 0s 43ms/step
1/1 [==============================] - 0s 30ms/step
1/1 [==============================] - 0s 32ms/step
1/1 [==============================] - 0s 28ms/step
1/1 [==============================] - 0s 45ms/step
1/1 [==============================] - 0s 33ms/step
4/4 [==============================] - 3s 190ms/step
4/4 [==============================] - 1s 158ms/step
Epoch 50: Average (IS): 2.5563430786132812 | Std (IS): 0.35864564776420593 | FID Score: 220.52706244936834

Saving Model Weights At Epoch 50.

391/391 [==============================] - 80s 205ms/step - g_loss: 1.0805 - d_real_loss: 0.5722 - d_fake_loss: 0.5285 - d_acc: 0.6888 - kl_divergence: 4.6725
Epoch 51/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.0867 - d_real_loss: 0.5678 - d_fake_loss: 0.5259 - d_acc: 0.6927 - kl_divergence: 4.7326
Epoch 52/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1049 - d_real_loss: 0.5670 - d_fake_loss: 0.5233 - d_acc: 0.6965 - kl_divergence: 4.7487
Epoch 53/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1055 - d_real_loss: 0.5629 - d_fake_loss: 0.5201 - d_acc: 0.6981 - kl_divergence: 4.7045
Epoch 54/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1119 - d_real_loss: 0.5608 - d_fake_loss: 0.5171 - d_acc: 0.7014 - kl_divergence: 4.7489
Epoch 55/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1312 - d_real_loss: 0.5575 - d_fake_loss: 0.5115 - d_acc: 0.7053 - kl_divergence: 4.7311
Epoch 56/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1436 - d_real_loss: 0.5562 - d_fake_loss: 0.5107 - d_acc: 0.7052 - kl_divergence: 4.7421
Epoch 57/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.1540 - d_real_loss: 0.5533 - d_fake_loss: 0.5083 - d_acc: 0.7098 - kl_divergence: 4.7563
Epoch 58/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.1653 - d_real_loss: 0.5504 - d_fake_loss: 0.5048 - d_acc: 0.7133 - kl_divergence: 4.7074
Epoch 59/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.1653 - d_real_loss: 0.5457 - d_fake_loss: 0.5014 - d_acc: 0.7194 - kl_divergence: 4.7642
Epoch 60/300
391/391 [==============================] - ETA: 0s - g_loss: 1.1791 - d_real_loss: 0.5452 - d_fake_loss: 0.5018 - d_acc: 0.7218 - kl_divergence: 4.7102
Saving Model Weights At Epoch 60.

391/391 [==============================] - 61s 157ms/step - g_loss: 1.1791 - d_real_loss: 0.5452 - d_fake_loss: 0.5018 - d_acc: 0.7218 - kl_divergence: 4.7102
Epoch 61/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.1852 - d_real_loss: 0.5426 - d_fake_loss: 0.4979 - d_acc: 0.7259 - kl_divergence: 4.7015
Epoch 62/300
391/391 [==============================] - 57s 147ms/step - g_loss: 1.1990 - d_real_loss: 0.5390 - d_fake_loss: 0.4947 - d_acc: 0.7280 - kl_divergence: 4.7593
Epoch 63/300
391/391 [==============================] - 60s 152ms/step - g_loss: 1.2082 - d_real_loss: 0.5351 - d_fake_loss: 0.4900 - d_acc: 0.7339 - kl_divergence: 4.7664
Epoch 64/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2234 - d_real_loss: 0.5321 - d_fake_loss: 0.4871 - d_acc: 0.7370 - kl_divergence: 4.7393
Epoch 65/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2330 - d_real_loss: 0.5332 - d_fake_loss: 0.4869 - d_acc: 0.7361 - kl_divergence: 4.7922
Epoch 66/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2470 - d_real_loss: 0.5271 - d_fake_loss: 0.4808 - d_acc: 0.7430 - kl_divergence: 4.7293
Epoch 67/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2477 - d_real_loss: 0.5249 - d_fake_loss: 0.4801 - d_acc: 0.7463 - kl_divergence: 4.7555
Epoch 68/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2628 - d_real_loss: 0.5224 - d_fake_loss: 0.4767 - d_acc: 0.7488 - kl_divergence: 4.7038
Epoch 69/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2807 - d_real_loss: 0.5189 - d_fake_loss: 0.4749 - d_acc: 0.7526 - kl_divergence: 4.7653
Epoch 70/300
391/391 [==============================] - ETA: 0s - g_loss: 1.2677 - d_real_loss: 0.5180 - d_fake_loss: 0.4741 - d_acc: 0.7535 - kl_divergence: 4.7794
Saving Model Weights At Epoch 70.

391/391 [==============================] - 62s 158ms/step - g_loss: 1.2677 - d_real_loss: 0.5180 - d_fake_loss: 0.4741 - d_acc: 0.7535 - kl_divergence: 4.7793
Epoch 71/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.2846 - d_real_loss: 0.5152 - d_fake_loss: 0.4707 - d_acc: 0.7571 - kl_divergence: 4.7916
Epoch 72/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3094 - d_real_loss: 0.5128 - d_fake_loss: 0.4697 - d_acc: 0.7618 - kl_divergence: 4.7329
Epoch 73/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.2993 - d_real_loss: 0.5117 - d_fake_loss: 0.4697 - d_acc: 0.7634 - kl_divergence: 4.7136
Epoch 74/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3171 - d_real_loss: 0.5102 - d_fake_loss: 0.4669 - d_acc: 0.7638 - kl_divergence: 4.7323
Epoch 75/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3192 - d_real_loss: 0.5066 - d_fake_loss: 0.4630 - d_acc: 0.7682 - kl_divergence: 4.8228
Epoch 76/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3322 - d_real_loss: 0.5022 - d_fake_loss: 0.4591 - d_acc: 0.7738 - kl_divergence: 4.7086
Epoch 77/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3251 - d_real_loss: 0.5040 - d_fake_loss: 0.4607 - d_acc: 0.7704 - kl_divergence: 4.7543
Epoch 78/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.3484 - d_real_loss: 0.4970 - d_fake_loss: 0.4557 - d_acc: 0.7798 - kl_divergence: 4.7334
Epoch 79/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3631 - d_real_loss: 0.4956 - d_fake_loss: 0.4563 - d_acc: 0.7813 - kl_divergence: 4.7475
Epoch 80/300
391/391 [==============================] - ETA: 0s - g_loss: 1.3809 - d_real_loss: 0.4918 - d_fake_loss: 0.4495 - d_acc: 0.7844 - kl_divergence: 4.7707
Saving Model Weights At Epoch 80.

391/391 [==============================] - 61s 157ms/step - g_loss: 1.3809 - d_real_loss: 0.4918 - d_fake_loss: 0.4495 - d_acc: 0.7844 - kl_divergence: 4.7706
Epoch 81/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3743 - d_real_loss: 0.4932 - d_fake_loss: 0.4518 - d_acc: 0.7831 - kl_divergence: 4.7601
Epoch 82/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3935 - d_real_loss: 0.4888 - d_fake_loss: 0.4489 - d_acc: 0.7863 - kl_divergence: 4.7933
Epoch 83/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.3891 - d_real_loss: 0.4874 - d_fake_loss: 0.4471 - d_acc: 0.7897 - kl_divergence: 4.7672
Epoch 84/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.4256 - d_real_loss: 0.4800 - d_fake_loss: 0.4401 - d_acc: 0.7966 - kl_divergence: 4.7855
Epoch 85/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.3979 - d_real_loss: 0.4824 - d_fake_loss: 0.4428 - d_acc: 0.7932 - kl_divergence: 4.7530
Epoch 86/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.4248 - d_real_loss: 0.4800 - d_fake_loss: 0.4416 - d_acc: 0.7965 - kl_divergence: 4.8214
Epoch 87/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.4252 - d_real_loss: 0.4787 - d_fake_loss: 0.4391 - d_acc: 0.7997 - kl_divergence: 4.7695
Epoch 88/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.4310 - d_real_loss: 0.4728 - d_fake_loss: 0.4348 - d_acc: 0.8056 - kl_divergence: 4.8359
Epoch 89/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.4585 - d_real_loss: 0.4720 - d_fake_loss: 0.4344 - d_acc: 0.8049 - kl_divergence: 4.7627
Epoch 90/300
391/391 [==============================] - ETA: 0s - g_loss: 1.4549 - d_real_loss: 0.4673 - d_fake_loss: 0.4293 - d_acc: 0.8094 - kl_divergence: 4.7236
Saving Model Weights At Epoch 90.

391/391 [==============================] - 62s 158ms/step - g_loss: 1.4549 - d_real_loss: 0.4673 - d_fake_loss: 0.4293 - d_acc: 0.8094 - kl_divergence: 4.7236
Epoch 91/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.5031 - d_real_loss: 0.4653 - d_fake_loss: 0.4283 - d_acc: 0.8129 - kl_divergence: 4.7680
Epoch 92/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.4976 - d_real_loss: 0.4622 - d_fake_loss: 0.4249 - d_acc: 0.8152 - kl_divergence: 4.7808
Epoch 93/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5334 - d_real_loss: 0.4596 - d_fake_loss: 0.4225 - d_acc: 0.8185 - kl_divergence: 4.7331
Epoch 94/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5137 - d_real_loss: 0.4548 - d_fake_loss: 0.4188 - d_acc: 0.8229 - kl_divergence: 4.7497
Epoch 95/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5386 - d_real_loss: 0.4552 - d_fake_loss: 0.4193 - d_acc: 0.8229 - kl_divergence: 4.6667
Epoch 96/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5346 - d_real_loss: 0.4568 - d_fake_loss: 0.4224 - d_acc: 0.8204 - kl_divergence: 4.7457
Epoch 97/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5246 - d_real_loss: 0.4519 - d_fake_loss: 0.4176 - d_acc: 0.8249 - kl_divergence: 4.7452
Epoch 98/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5434 - d_real_loss: 0.4487 - d_fake_loss: 0.4161 - d_acc: 0.8286 - kl_divergence: 4.7768
Epoch 99/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5618 - d_real_loss: 0.4439 - d_fake_loss: 0.4098 - d_acc: 0.8351 - kl_divergence: 4.7997
Epoch 100/300
1/1 [==============================] - 1s 1s/steps - g_loss: 1.5666 - d_real_loss: 0.4432 - d_fake_loss: 0.4128 - d_acc: 0.8352 - kl_divergence: 4.78
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 45ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 44ms/step
1/1 [==============================] - 0s 29ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 44ms/step
4/4 [==============================] - 2s 248ms/step
4/4 [==============================] - 1s 154ms/step
Epoch 100: Average (IS): 2.5885984897613525 | Std (IS): 0.36903247237205505 | FID Score: 226.33103216578436

Saving Model Weights At Epoch 100.

391/391 [==============================] - 77s 198ms/step - g_loss: 1.5666 - d_real_loss: 0.4432 - d_fake_loss: 0.4128 - d_acc: 0.8352 - kl_divergence: 4.7805
Epoch 101/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5987 - d_real_loss: 0.4376 - d_fake_loss: 0.4046 - d_acc: 0.8394 - kl_divergence: 4.7519
Epoch 102/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6085 - d_real_loss: 0.4369 - d_fake_loss: 0.4049 - d_acc: 0.8409 - kl_divergence: 4.7568
Epoch 103/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6263 - d_real_loss: 0.4336 - d_fake_loss: 0.4017 - d_acc: 0.8449 - kl_divergence: 4.6816
Epoch 104/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6156 - d_real_loss: 0.4319 - d_fake_loss: 0.4020 - d_acc: 0.8474 - kl_divergence: 4.7309
Epoch 105/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6377 - d_real_loss: 0.4287 - d_fake_loss: 0.3984 - d_acc: 0.8487 - kl_divergence: 4.7170
Epoch 106/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6414 - d_real_loss: 0.4262 - d_fake_loss: 0.3971 - d_acc: 0.8519 - kl_divergence: 4.7317
Epoch 107/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6663 - d_real_loss: 0.4252 - d_fake_loss: 0.3993 - d_acc: 0.8525 - kl_divergence: 4.6848
Epoch 108/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7117 - d_real_loss: 0.4232 - d_fake_loss: 0.3946 - d_acc: 0.8547 - kl_divergence: 4.8193
Epoch 109/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6743 - d_real_loss: 0.4237 - d_fake_loss: 0.3966 - d_acc: 0.8548 - kl_divergence: 4.7372
Epoch 110/300
391/391 [==============================] - ETA: 0s - g_loss: 1.7169 - d_real_loss: 0.4153 - d_fake_loss: 0.3885 - d_acc: 0.8610 - kl_divergence: 4.6860
Saving Model Weights At Epoch 110.

391/391 [==============================] - 61s 155ms/step - g_loss: 1.7169 - d_real_loss: 0.4153 - d_fake_loss: 0.3885 - d_acc: 0.8610 - kl_divergence: 4.6861
Epoch 111/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.6981 - d_real_loss: 0.4174 - d_fake_loss: 0.3905 - d_acc: 0.8601 - kl_divergence: 4.6928
Epoch 112/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7254 - d_real_loss: 0.4162 - d_fake_loss: 0.3899 - d_acc: 0.8627 - kl_divergence: 4.7003
Epoch 113/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6848 - d_real_loss: 0.4139 - d_fake_loss: 0.3899 - d_acc: 0.8630 - kl_divergence: 4.7956
Epoch 114/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.7266 - d_real_loss: 0.4100 - d_fake_loss: 0.3869 - d_acc: 0.8679 - kl_divergence: 4.7887
Epoch 115/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.7733 - d_real_loss: 0.4042 - d_fake_loss: 0.3813 - d_acc: 0.8726 - kl_divergence: 4.7580
Epoch 116/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.7701 - d_real_loss: 0.4044 - d_fake_loss: 0.3797 - d_acc: 0.8729 - kl_divergence: 4.7194
Epoch 117/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.7502 - d_real_loss: 0.4075 - d_fake_loss: 0.3842 - d_acc: 0.8689 - kl_divergence: 4.7044
Epoch 118/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7922 - d_real_loss: 0.4006 - d_fake_loss: 0.3793 - d_acc: 0.8766 - kl_divergence: 4.7157
Epoch 119/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.7897 - d_real_loss: 0.3969 - d_fake_loss: 0.3758 - d_acc: 0.8796 - kl_divergence: 4.6925
Epoch 120/300
391/391 [==============================] - ETA: 0s - g_loss: 1.8096 - d_real_loss: 0.3942 - d_fake_loss: 0.3745 - d_acc: 0.8839 - kl_divergence: 4.6649
Saving Model Weights At Epoch 120.

391/391 [==============================] - 63s 160ms/step - g_loss: 1.8096 - d_real_loss: 0.3942 - d_fake_loss: 0.3745 - d_acc: 0.8839 - kl_divergence: 4.6648
Epoch 121/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.8150 - d_real_loss: 0.3920 - d_fake_loss: 0.3729 - d_acc: 0.8861 - kl_divergence: 4.7242
Epoch 122/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8625 - d_real_loss: 0.3863 - d_fake_loss: 0.3669 - d_acc: 0.8905 - kl_divergence: 4.7372
Epoch 123/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8081 - d_real_loss: 0.3825 - d_fake_loss: 0.3658 - d_acc: 0.8948 - kl_divergence: 4.7151
Epoch 124/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9113 - d_real_loss: 0.3764 - d_fake_loss: 0.3578 - d_acc: 0.8997 - kl_divergence: 4.7163
Epoch 125/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9031 - d_real_loss: 0.3774 - d_fake_loss: 0.3584 - d_acc: 0.8983 - kl_divergence: 4.7772
Epoch 126/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8956 - d_real_loss: 0.3710 - d_fake_loss: 0.3546 - d_acc: 0.9041 - kl_divergence: 4.6803
Epoch 127/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9199 - d_real_loss: 0.3727 - d_fake_loss: 0.3583 - d_acc: 0.9040 - kl_divergence: 4.7382
Epoch 128/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9408 - d_real_loss: 0.3665 - d_fake_loss: 0.3510 - d_acc: 0.9089 - kl_divergence: 4.7290
Epoch 129/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9754 - d_real_loss: 0.3584 - d_fake_loss: 0.3433 - d_acc: 0.9164 - kl_divergence: 4.6596
Epoch 130/300
391/391 [==============================] - ETA: 0s - g_loss: 2.0160 - d_real_loss: 0.3548 - d_fake_loss: 0.3356 - d_acc: 0.9186 - kl_divergence: 4.7089
Saving Model Weights At Epoch 130.

391/391 [==============================] - 60s 154ms/step - g_loss: 2.0160 - d_real_loss: 0.3548 - d_fake_loss: 0.3356 - d_acc: 0.9186 - kl_divergence: 4.7090
Epoch 131/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0486 - d_real_loss: 0.3540 - d_fake_loss: 0.3348 - d_acc: 0.9190 - kl_divergence: 4.7711
Epoch 132/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0736 - d_real_loss: 0.3505 - d_fake_loss: 0.3273 - d_acc: 0.9208 - kl_divergence: 4.6466
Epoch 133/300
391/391 [==============================] - 53s 137ms/step - g_loss: 2.1158 - d_real_loss: 0.3391 - d_fake_loss: 0.3141 - d_acc: 0.9317 - kl_divergence: 4.8266
Epoch 134/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.3472 - d_real_loss: 0.3372 - d_fake_loss: 0.2967 - d_acc: 0.9340 - kl_divergence: 4.4984
Epoch 135/300
391/391 [==============================] - 53s 137ms/step - g_loss: 2.6330 - d_real_loss: 0.3495 - d_fake_loss: 0.2842 - d_acc: 0.9300 - kl_divergence: 4.5024
Epoch 136/300
391/391 [==============================] - 53s 137ms/step - g_loss: 2.6494 - d_real_loss: 0.3766 - d_fake_loss: 0.2957 - d_acc: 0.9156 - kl_divergence: 4.9377
Epoch 137/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.4843 - d_real_loss: 0.3805 - d_fake_loss: 0.3025 - d_acc: 0.9064 - kl_divergence: 4.8259
Epoch 138/300
391/391 [==============================] - 53s 137ms/step - g_loss: 2.3480 - d_real_loss: 0.3863 - d_fake_loss: 0.3158 - d_acc: 0.9001 - kl_divergence: 4.7303
Epoch 139/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.2905 - d_real_loss: 0.3924 - d_fake_loss: 0.3238 - d_acc: 0.8931 - kl_divergence: 4.8787
Epoch 140/300
391/391 [==============================] - ETA: 0s - g_loss: 2.2637 - d_real_loss: 0.3883 - d_fake_loss: 0.3200 - d_acc: 0.8957 - kl_divergence: 4.7944
Saving Model Weights At Epoch 140.

391/391 [==============================] - 61s 157ms/step - g_loss: 2.2637 - d_real_loss: 0.3883 - d_fake_loss: 0.3200 - d_acc: 0.8957 - kl_divergence: 4.7944
Epoch 141/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.2816 - d_real_loss: 0.3914 - d_fake_loss: 0.3220 - d_acc: 0.8940 - kl_divergence: 4.6190
Epoch 142/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.1969 - d_real_loss: 0.4008 - d_fake_loss: 0.3329 - d_acc: 0.8857 - kl_divergence: 4.9015
Epoch 143/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.1013 - d_real_loss: 0.4085 - d_fake_loss: 0.3435 - d_acc: 0.8751 - kl_divergence: 4.8747
Epoch 144/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0827 - d_real_loss: 0.4065 - d_fake_loss: 0.3414 - d_acc: 0.8799 - kl_divergence: 4.6136
Epoch 145/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.1589 - d_real_loss: 0.4032 - d_fake_loss: 0.3365 - d_acc: 0.8819 - kl_divergence: 4.5929
Epoch 146/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0767 - d_real_loss: 0.4194 - d_fake_loss: 0.3518 - d_acc: 0.8688 - kl_divergence: 4.6304
Epoch 147/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.1070 - d_real_loss: 0.4086 - d_fake_loss: 0.3406 - d_acc: 0.8777 - kl_divergence: 4.7231
Epoch 148/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0750 - d_real_loss: 0.4147 - d_fake_loss: 0.3489 - d_acc: 0.8734 - kl_divergence: 4.8061
Epoch 149/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0378 - d_real_loss: 0.4112 - d_fake_loss: 0.3502 - d_acc: 0.8732 - kl_divergence: 4.5906
Epoch 150/300
1/1 [==============================] - 2s 2s/steps - g_loss: 2.0552 - d_real_loss: 0.4183 - d_fake_loss: 0.3547 - d_acc: 0.8661 - kl_divergence: 4.81
1/1 [==============================] - 0s 58ms/step
1/1 [==============================] - 0s 43ms/step
1/1 [==============================] - 0s 49ms/step
1/1 [==============================] - 0s 48ms/step
1/1 [==============================] - 0s 42ms/step
1/1 [==============================] - 0s 43ms/step
1/1 [==============================] - 0s 42ms/step
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 38ms/step
4/4 [==============================] - 2s 237ms/step
4/4 [==============================] - 1s 156ms/step
Epoch 150: Average (IS): 2.4189743995666504 | Std (IS): 0.29598361253738403 | FID Score: 233.23006380998675

Saving Model Weights At Epoch 150.

391/391 [==============================] - 84s 215ms/step - g_loss: 2.0552 - d_real_loss: 0.4183 - d_fake_loss: 0.3547 - d_acc: 0.8661 - kl_divergence: 4.8156
Epoch 151/300
391/391 [==============================] - 53s 135ms/step - g_loss: 2.0383 - d_real_loss: 0.4200 - d_fake_loss: 0.3558 - d_acc: 0.8666 - kl_divergence: 4.7918
Epoch 152/300
391/391 [==============================] - 53s 136ms/step - g_loss: 2.0227 - d_real_loss: 0.4219 - d_fake_loss: 0.3610 - d_acc: 0.8651 - kl_divergence: 4.7208
Epoch 153/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9711 - d_real_loss: 0.4265 - d_fake_loss: 0.3597 - d_acc: 0.8620 - kl_divergence: 4.7155
Epoch 154/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9660 - d_real_loss: 0.4303 - d_fake_loss: 0.3641 - d_acc: 0.8577 - kl_divergence: 4.6898
Epoch 155/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9264 - d_real_loss: 0.4299 - d_fake_loss: 0.3656 - d_acc: 0.8567 - kl_divergence: 4.7755
Epoch 156/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.9102 - d_real_loss: 0.4331 - d_fake_loss: 0.3688 - d_acc: 0.8554 - kl_divergence: 4.7778
Epoch 157/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9442 - d_real_loss: 0.4275 - d_fake_loss: 0.3628 - d_acc: 0.8588 - kl_divergence: 4.9094
Epoch 158/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9143 - d_real_loss: 0.4362 - d_fake_loss: 0.3689 - d_acc: 0.8523 - kl_divergence: 4.7301
Epoch 159/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9017 - d_real_loss: 0.4330 - d_fake_loss: 0.3652 - d_acc: 0.8544 - kl_divergence: 4.8097
Epoch 160/300
391/391 [==============================] - ETA: 0s - g_loss: 1.9093 - d_real_loss: 0.4307 - d_fake_loss: 0.3612 - d_acc: 0.8548 - kl_divergence: 4.7677
Saving Model Weights At Epoch 160.

391/391 [==============================] - 63s 160ms/step - g_loss: 1.9093 - d_real_loss: 0.4307 - d_fake_loss: 0.3612 - d_acc: 0.8548 - kl_divergence: 4.7676
Epoch 161/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9416 - d_real_loss: 0.4357 - d_fake_loss: 0.3644 - d_acc: 0.8544 - kl_divergence: 4.8283
Epoch 162/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9314 - d_real_loss: 0.4317 - d_fake_loss: 0.3677 - d_acc: 0.8538 - kl_divergence: 4.7336
Epoch 163/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8874 - d_real_loss: 0.4356 - d_fake_loss: 0.3694 - d_acc: 0.8517 - kl_divergence: 4.8419
Epoch 164/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9102 - d_real_loss: 0.4341 - d_fake_loss: 0.3686 - d_acc: 0.8527 - kl_divergence: 4.8742
Epoch 165/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9345 - d_real_loss: 0.4292 - d_fake_loss: 0.3647 - d_acc: 0.8567 - kl_divergence: 4.7664
Epoch 166/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8616 - d_real_loss: 0.4319 - d_fake_loss: 0.3637 - d_acc: 0.8568 - kl_divergence: 4.9671
Epoch 167/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8958 - d_real_loss: 0.4408 - d_fake_loss: 0.3687 - d_acc: 0.8476 - kl_divergence: 4.9425
Epoch 168/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.9038 - d_real_loss: 0.4429 - d_fake_loss: 0.3679 - d_acc: 0.8493 - kl_divergence: 5.0456
Epoch 169/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8505 - d_real_loss: 0.4376 - d_fake_loss: 0.3662 - d_acc: 0.8511 - kl_divergence: 4.8549
Epoch 170/300
391/391 [==============================] - ETA: 0s - g_loss: 1.8936 - d_real_loss: 0.4397 - d_fake_loss: 0.3693 - d_acc: 0.8493 - kl_divergence: 4.9756
Saving Model Weights At Epoch 170.

391/391 [==============================] - 62s 159ms/step - g_loss: 1.8936 - d_real_loss: 0.4397 - d_fake_loss: 0.3693 - d_acc: 0.8493 - kl_divergence: 4.9755
Epoch 171/300
391/391 [==============================] - 53s 135ms/step - g_loss: 1.7968 - d_real_loss: 0.4379 - d_fake_loss: 0.3688 - d_acc: 0.8481 - kl_divergence: 4.8786
Epoch 172/300
391/391 [==============================] - 53s 135ms/step - g_loss: 1.8556 - d_real_loss: 0.4396 - d_fake_loss: 0.3649 - d_acc: 0.8488 - kl_divergence: 4.8841
Epoch 173/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8133 - d_real_loss: 0.4455 - d_fake_loss: 0.3758 - d_acc: 0.8425 - kl_divergence: 4.9967
Epoch 174/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7670 - d_real_loss: 0.4502 - d_fake_loss: 0.3786 - d_acc: 0.8397 - kl_divergence: 4.7573
Epoch 175/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8214 - d_real_loss: 0.4436 - d_fake_loss: 0.3715 - d_acc: 0.8460 - kl_divergence: 4.7991
Epoch 176/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.8149 - d_real_loss: 0.4430 - d_fake_loss: 0.3725 - d_acc: 0.8470 - kl_divergence: 4.8292
Epoch 177/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.8054 - d_real_loss: 0.4635 - d_fake_loss: 0.3870 - d_acc: 0.8289 - kl_divergence: 4.8674
Epoch 178/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.7430 - d_real_loss: 0.4592 - d_fake_loss: 0.3911 - d_acc: 0.8289 - kl_divergence: 4.9328
Epoch 179/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7617 - d_real_loss: 0.4506 - d_fake_loss: 0.3826 - d_acc: 0.8377 - kl_divergence: 4.9266
Epoch 180/300
391/391 [==============================] - ETA: 0s - g_loss: 1.7627 - d_real_loss: 0.4606 - d_fake_loss: 0.3882 - d_acc: 0.8297 - kl_divergence: 4.9727
Saving Model Weights At Epoch 180.

391/391 [==============================] - 61s 156ms/step - g_loss: 1.7627 - d_real_loss: 0.4606 - d_fake_loss: 0.3882 - d_acc: 0.8297 - kl_divergence: 4.9725
Epoch 181/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7500 - d_real_loss: 0.4610 - d_fake_loss: 0.3919 - d_acc: 0.8285 - kl_divergence: 4.8095
Epoch 182/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7035 - d_real_loss: 0.4561 - d_fake_loss: 0.3868 - d_acc: 0.8321 - kl_divergence: 4.8942
Epoch 183/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7365 - d_real_loss: 0.4554 - d_fake_loss: 0.3863 - d_acc: 0.8331 - kl_divergence: 4.6762
Epoch 184/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7313 - d_real_loss: 0.4547 - d_fake_loss: 0.3850 - d_acc: 0.8332 - kl_divergence: 4.8808
Epoch 185/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7146 - d_real_loss: 0.4619 - d_fake_loss: 0.3855 - d_acc: 0.8292 - kl_divergence: 4.9960
Epoch 186/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7455 - d_real_loss: 0.4535 - d_fake_loss: 0.3818 - d_acc: 0.8343 - kl_divergence: 4.8990
Epoch 187/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6931 - d_real_loss: 0.4558 - d_fake_loss: 0.3843 - d_acc: 0.8328 - kl_divergence: 5.0269
Epoch 188/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7304 - d_real_loss: 0.4700 - d_fake_loss: 0.3929 - d_acc: 0.8222 - kl_divergence: 4.7870
Epoch 189/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6746 - d_real_loss: 0.4652 - d_fake_loss: 0.3903 - d_acc: 0.8253 - kl_divergence: 4.9735
Epoch 190/300
391/391 [==============================] - ETA: 0s - g_loss: 1.6885 - d_real_loss: 0.4661 - d_fake_loss: 0.3960 - d_acc: 0.8249 - kl_divergence: 4.8491
Saving Model Weights At Epoch 190.

391/391 [==============================] - 61s 155ms/step - g_loss: 1.6885 - d_real_loss: 0.4661 - d_fake_loss: 0.3960 - d_acc: 0.8249 - kl_divergence: 4.8491
Epoch 191/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6517 - d_real_loss: 0.4721 - d_fake_loss: 0.3995 - d_acc: 0.8175 - kl_divergence: 4.8957
Epoch 192/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6939 - d_real_loss: 0.4641 - d_fake_loss: 0.3901 - d_acc: 0.8262 - kl_divergence: 4.9939
Epoch 193/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.7381 - d_real_loss: 0.4633 - d_fake_loss: 0.3894 - d_acc: 0.8259 - kl_divergence: 4.9138
Epoch 194/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6562 - d_real_loss: 0.4719 - d_fake_loss: 0.3939 - d_acc: 0.8191 - kl_divergence: 4.8778
Epoch 195/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.6567 - d_real_loss: 0.4638 - d_fake_loss: 0.3879 - d_acc: 0.8272 - kl_divergence: 4.8063
Epoch 196/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6953 - d_real_loss: 0.4657 - d_fake_loss: 0.3952 - d_acc: 0.8227 - kl_divergence: 4.8513
Epoch 197/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6799 - d_real_loss: 0.4549 - d_fake_loss: 0.3789 - d_acc: 0.8357 - kl_divergence: 4.9269
Epoch 198/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6921 - d_real_loss: 0.4617 - d_fake_loss: 0.3882 - d_acc: 0.8285 - kl_divergence: 4.6780
Epoch 199/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6620 - d_real_loss: 0.4561 - d_fake_loss: 0.3852 - d_acc: 0.8337 - kl_divergence: 4.6523
Epoch 200/300
1/1 [==============================] - 1s 1s/steps - g_loss: 1.6457 - d_real_loss: 0.4679 - d_fake_loss: 0.3886 - d_acc: 0.8269 - kl_divergence: 4.73
1/1 [==============================] - 0s 36ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 35ms/step
1/1 [==============================] - 0s 35ms/step
1/1 [==============================] - 0s 44ms/step
1/1 [==============================] - 0s 37ms/step
1/1 [==============================] - 0s 50ms/step
1/1 [==============================] - 0s 34ms/step
1/1 [==============================] - 0s 38ms/step
4/4 [==============================] - 2s 253ms/step
4/4 [==============================] - 1s 155ms/step
Epoch 200: Average (IS): 2.1762537956237793 | Std (IS): 0.33091554045677185 | FID Score: 236.76042884820077

Saving Model Weights At Epoch 200.

391/391 [==============================] - 83s 214ms/step - g_loss: 1.6457 - d_real_loss: 0.4679 - d_fake_loss: 0.3886 - d_acc: 0.8269 - kl_divergence: 4.7305
Epoch 201/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.6034 - d_real_loss: 0.4710 - d_fake_loss: 0.3962 - d_acc: 0.8195 - kl_divergence: 4.8578
Epoch 202/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6391 - d_real_loss: 0.4699 - d_fake_loss: 0.3931 - d_acc: 0.8227 - kl_divergence: 4.6870
Epoch 203/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.6535 - d_real_loss: 0.4466 - d_fake_loss: 0.3769 - d_acc: 0.8424 - kl_divergence: 4.7728
Epoch 204/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5968 - d_real_loss: 0.4748 - d_fake_loss: 0.3964 - d_acc: 0.8184 - kl_divergence: 4.9389
Epoch 205/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5971 - d_real_loss: 0.4931 - d_fake_loss: 0.4121 - d_acc: 0.8043 - kl_divergence: 4.8793
Epoch 206/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.6011 - d_real_loss: 0.4897 - d_fake_loss: 0.4124 - d_acc: 0.8046 - kl_divergence: 4.6599
Epoch 207/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5791 - d_real_loss: 0.4684 - d_fake_loss: 0.3938 - d_acc: 0.8228 - kl_divergence: 4.7728
Epoch 208/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5815 - d_real_loss: 0.4734 - d_fake_loss: 0.3954 - d_acc: 0.8172 - kl_divergence: 4.8619
Epoch 209/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6551 - d_real_loss: 0.4784 - d_fake_loss: 0.4010 - d_acc: 0.8131 - kl_divergence: 4.6020
Epoch 210/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5805 - d_real_loss: 0.4963 - d_fake_loss: 0.4216 - d_acc: 0.7957 - kl_divergence: 4.6139
Saving Model Weights At Epoch 210.

391/391 [==============================] - 61s 155ms/step - g_loss: 1.5805 - d_real_loss: 0.4963 - d_fake_loss: 0.4216 - d_acc: 0.7957 - kl_divergence: 4.6141
Epoch 211/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5530 - d_real_loss: 0.4970 - d_fake_loss: 0.4199 - d_acc: 0.7971 - kl_divergence: 4.8157
Epoch 212/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.6142 - d_real_loss: 0.4941 - d_fake_loss: 0.4174 - d_acc: 0.7997 - kl_divergence: 4.6861
Epoch 213/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5305 - d_real_loss: 0.4693 - d_fake_loss: 0.4013 - d_acc: 0.8190 - kl_divergence: 4.7565
Epoch 214/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5938 - d_real_loss: 0.4857 - d_fake_loss: 0.4087 - d_acc: 0.8096 - kl_divergence: 4.7666
Epoch 215/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5709 - d_real_loss: 0.4820 - d_fake_loss: 0.4086 - d_acc: 0.8109 - kl_divergence: 4.7292
Epoch 216/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5234 - d_real_loss: 0.4877 - d_fake_loss: 0.4143 - d_acc: 0.8037 - kl_divergence: 4.8311
Epoch 217/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5445 - d_real_loss: 0.4808 - d_fake_loss: 0.4077 - d_acc: 0.8084 - kl_divergence: 4.9552
Epoch 218/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5230 - d_real_loss: 0.4856 - d_fake_loss: 0.4172 - d_acc: 0.8046 - kl_divergence: 4.6557
Epoch 219/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5136 - d_real_loss: 0.5006 - d_fake_loss: 0.4236 - d_acc: 0.7899 - kl_divergence: 4.7786
Epoch 220/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5338 - d_real_loss: 0.4974 - d_fake_loss: 0.4251 - d_acc: 0.7937 - kl_divergence: 4.5619
Saving Model Weights At Epoch 220.

391/391 [==============================] - 61s 156ms/step - g_loss: 1.5338 - d_real_loss: 0.4974 - d_fake_loss: 0.4251 - d_acc: 0.7937 - kl_divergence: 4.5623
Epoch 221/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.5268 - d_real_loss: 0.4995 - d_fake_loss: 0.4286 - d_acc: 0.7920 - kl_divergence: 4.7495
Epoch 222/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5099 - d_real_loss: 0.4957 - d_fake_loss: 0.4199 - d_acc: 0.7964 - kl_divergence: 4.6772
Epoch 223/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5352 - d_real_loss: 0.4957 - d_fake_loss: 0.4205 - d_acc: 0.7951 - kl_divergence: 4.8490
Epoch 224/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5085 - d_real_loss: 0.4871 - d_fake_loss: 0.4141 - d_acc: 0.8020 - kl_divergence: 4.6324
Epoch 225/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5249 - d_real_loss: 0.4935 - d_fake_loss: 0.4227 - d_acc: 0.7974 - kl_divergence: 4.7468
Epoch 226/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5331 - d_real_loss: 0.4853 - d_fake_loss: 0.4167 - d_acc: 0.8026 - kl_divergence: 4.6646
Epoch 227/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.4840 - d_real_loss: 0.4998 - d_fake_loss: 0.4288 - d_acc: 0.7901 - kl_divergence: 4.9173
Epoch 228/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5267 - d_real_loss: 0.4853 - d_fake_loss: 0.4149 - d_acc: 0.8027 - kl_divergence: 4.7069
Epoch 229/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5278 - d_real_loss: 0.4844 - d_fake_loss: 0.4161 - d_acc: 0.8047 - kl_divergence: 4.9738
Epoch 230/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5216 - d_real_loss: 0.4802 - d_fake_loss: 0.4159 - d_acc: 0.8062 - kl_divergence: 4.8430
Saving Model Weights At Epoch 230.

391/391 [==============================] - 64s 164ms/step - g_loss: 1.5216 - d_real_loss: 0.4802 - d_fake_loss: 0.4159 - d_acc: 0.8062 - kl_divergence: 4.8429
Epoch 231/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5420 - d_real_loss: 0.4993 - d_fake_loss: 0.4304 - d_acc: 0.7884 - kl_divergence: 4.8574
Epoch 232/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5026 - d_real_loss: 0.4899 - d_fake_loss: 0.4206 - d_acc: 0.7985 - kl_divergence: 4.8006
Epoch 233/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5658 - d_real_loss: 0.4783 - d_fake_loss: 0.4060 - d_acc: 0.8112 - kl_divergence: 4.7853
Epoch 234/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5477 - d_real_loss: 0.4950 - d_fake_loss: 0.4299 - d_acc: 0.7917 - kl_divergence: 5.0071
Epoch 235/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5475 - d_real_loss: 0.4866 - d_fake_loss: 0.4227 - d_acc: 0.8005 - kl_divergence: 4.6797
Epoch 236/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.4925 - d_real_loss: 0.4981 - d_fake_loss: 0.4330 - d_acc: 0.7890 - kl_divergence: 4.7540
Epoch 237/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5536 - d_real_loss: 0.5023 - d_fake_loss: 0.4314 - d_acc: 0.7851 - kl_divergence: 4.7633
Epoch 238/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.4907 - d_real_loss: 0.4966 - d_fake_loss: 0.4272 - d_acc: 0.7916 - kl_divergence: 4.8126
Epoch 239/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5396 - d_real_loss: 0.4844 - d_fake_loss: 0.4181 - d_acc: 0.8034 - kl_divergence: 4.7917
Epoch 240/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5209 - d_real_loss: 0.4760 - d_fake_loss: 0.4110 - d_acc: 0.8090 - kl_divergence: 4.6452
Saving Model Weights At Epoch 240.

391/391 [==============================] - 62s 159ms/step - g_loss: 1.5209 - d_real_loss: 0.4760 - d_fake_loss: 0.4110 - d_acc: 0.8090 - kl_divergence: 4.6453
Epoch 241/300
391/391 [==============================] - 54s 136ms/step - g_loss: 1.5007 - d_real_loss: 0.4927 - d_fake_loss: 0.4269 - d_acc: 0.7932 - kl_divergence: 4.7813
Epoch 242/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5181 - d_real_loss: 0.4742 - d_fake_loss: 0.4105 - d_acc: 0.8110 - kl_divergence: 4.7740
Epoch 243/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5487 - d_real_loss: 0.4755 - d_fake_loss: 0.4063 - d_acc: 0.8114 - kl_divergence: 4.5981
Epoch 244/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5538 - d_real_loss: 0.4877 - d_fake_loss: 0.4201 - d_acc: 0.8014 - kl_divergence: 4.7233
Epoch 245/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5333 - d_real_loss: 0.4888 - d_fake_loss: 0.4253 - d_acc: 0.7987 - kl_divergence: 4.8391
Epoch 246/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5201 - d_real_loss: 0.4803 - d_fake_loss: 0.4149 - d_acc: 0.8070 - kl_divergence: 4.7456
Epoch 247/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5024 - d_real_loss: 0.4776 - d_fake_loss: 0.4147 - d_acc: 0.8095 - kl_divergence: 4.8490
Epoch 248/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5357 - d_real_loss: 0.4832 - d_fake_loss: 0.4175 - d_acc: 0.8034 - kl_divergence: 4.8287
Epoch 249/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5396 - d_real_loss: 0.4907 - d_fake_loss: 0.4240 - d_acc: 0.7973 - kl_divergence: 4.8971
Epoch 250/300
1/1 [==============================] - 1s 1s/steps - g_loss: 1.5178 - d_real_loss: 0.4900 - d_fake_loss: 0.4256 - d_acc: 0.7979 - kl_divergence: 4.86
1/1 [==============================] - 0s 43ms/step
1/1 [==============================] - 0s 35ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 67ms/step
1/1 [==============================] - 0s 42ms/step
1/1 [==============================] - 0s 55ms/step
1/1 [==============================] - 0s 59ms/step
1/1 [==============================] - 0s 45ms/step
1/1 [==============================] - 0s 61ms/step
4/4 [==============================] - 3s 256ms/step
4/4 [==============================] - 1s 153ms/step
Epoch 250: Average (IS): 2.290245294570923 | Std (IS): 0.3039969503879547 | FID Score: 217.20405105700033

Saving Model Weights At Epoch 250.

391/391 [==============================] - 88s 226ms/step - g_loss: 1.5178 - d_real_loss: 0.4900 - d_fake_loss: 0.4256 - d_acc: 0.7979 - kl_divergence: 4.8618
Epoch 251/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5145 - d_real_loss: 0.4846 - d_fake_loss: 0.4264 - d_acc: 0.8000 - kl_divergence: 4.8023
Epoch 252/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.4963 - d_real_loss: 0.4887 - d_fake_loss: 0.4227 - d_acc: 0.7988 - kl_divergence: 4.7410
Epoch 253/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5446 - d_real_loss: 0.4885 - d_fake_loss: 0.4208 - d_acc: 0.7968 - kl_divergence: 4.9571
Epoch 254/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.4879 - d_real_loss: 0.4808 - d_fake_loss: 0.4201 - d_acc: 0.8048 - kl_divergence: 4.7182
Epoch 255/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5348 - d_real_loss: 0.4741 - d_fake_loss: 0.4101 - d_acc: 0.8118 - kl_divergence: 4.8722
Epoch 256/300
391/391 [==============================] - 53s 137ms/step - g_loss: 1.5419 - d_real_loss: 0.4806 - d_fake_loss: 0.4162 - d_acc: 0.8070 - kl_divergence: 4.7705
Epoch 257/300
391/391 [==============================] - 53s 136ms/step - g_loss: 1.5398 - d_real_loss: 0.4709 - d_fake_loss: 0.4082 - d_acc: 0.8119 - kl_divergence: 4.8011
Epoch 258/300
391/391 [==============================] - 208s 533ms/step - g_loss: 1.5506 - d_real_loss: 0.4735 - d_fake_loss: 0.4113 - d_acc: 0.8122 - kl_divergence: 4.7293
Epoch 259/300
391/391 [==============================] - 253s 646ms/step - g_loss: 1.5500 - d_real_loss: 0.4770 - d_fake_loss: 0.4100 - d_acc: 0.8073 - kl_divergence: 4.9482
Epoch 260/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5208 - d_real_loss: 0.4784 - d_fake_loss: 0.4163 - d_acc: 0.8058 - kl_divergence: 4.8223
Saving Model Weights At Epoch 260.

391/391 [==============================] - 270s 691ms/step - g_loss: 1.5208 - d_real_loss: 0.4784 - d_fake_loss: 0.4163 - d_acc: 0.8058 - kl_divergence: 4.8223
Epoch 261/300
391/391 [==============================] - 228s 583ms/step - g_loss: 1.5336 - d_real_loss: 0.4676 - d_fake_loss: 0.4117 - d_acc: 0.8163 - kl_divergence: 4.6791
Epoch 262/300
391/391 [==============================] - 253s 646ms/step - g_loss: 1.5588 - d_real_loss: 0.4709 - d_fake_loss: 0.4084 - d_acc: 0.8125 - kl_divergence: 4.9301
Epoch 263/300
391/391 [==============================] - 252s 645ms/step - g_loss: 1.5668 - d_real_loss: 0.4665 - d_fake_loss: 0.4014 - d_acc: 0.8177 - kl_divergence: 4.8051
Epoch 264/300
391/391 [==============================] - 89s 226ms/step - g_loss: 1.5506 - d_real_loss: 0.4602 - d_fake_loss: 0.3992 - d_acc: 0.8247 - kl_divergence: 4.7766
Epoch 265/300
391/391 [==============================] - 54s 137ms/step - g_loss: 1.5403 - d_real_loss: 0.4658 - d_fake_loss: 0.4033 - d_acc: 0.8171 - kl_divergence: 4.8491
Epoch 266/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5438 - d_real_loss: 0.4712 - d_fake_loss: 0.4106 - d_acc: 0.8155 - kl_divergence: 4.7914
Epoch 267/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5621 - d_real_loss: 0.4790 - d_fake_loss: 0.4145 - d_acc: 0.8066 - kl_divergence: 4.7692
Epoch 268/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5842 - d_real_loss: 0.4869 - d_fake_loss: 0.4213 - d_acc: 0.8010 - kl_divergence: 4.8327
Epoch 269/300
391/391 [==============================] - 54s 139ms/step - g_loss: 1.5386 - d_real_loss: 0.4856 - d_fake_loss: 0.4253 - d_acc: 0.7995 - kl_divergence: 4.6878
Epoch 270/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5761 - d_real_loss: 0.4768 - d_fake_loss: 0.4150 - d_acc: 0.8083 - kl_divergence: 4.7529
Saving Model Weights At Epoch 270.

391/391 [==============================] - 65s 167ms/step - g_loss: 1.5761 - d_real_loss: 0.4768 - d_fake_loss: 0.4150 - d_acc: 0.8083 - kl_divergence: 4.7529
Epoch 271/300
391/391 [==============================] - 55s 139ms/step - g_loss: 1.5549 - d_real_loss: 0.4739 - d_fake_loss: 0.4166 - d_acc: 0.8126 - kl_divergence: 4.6335
Epoch 272/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.5456 - d_real_loss: 0.4798 - d_fake_loss: 0.4226 - d_acc: 0.8048 - kl_divergence: 4.7736
Epoch 273/300
391/391 [==============================] - 55s 142ms/step - g_loss: 1.5723 - d_real_loss: 0.4766 - d_fake_loss: 0.4177 - d_acc: 0.8089 - kl_divergence: 4.7639
Epoch 274/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.5420 - d_real_loss: 0.4736 - d_fake_loss: 0.4138 - d_acc: 0.8112 - kl_divergence: 4.7909
Epoch 275/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.5703 - d_real_loss: 0.4596 - d_fake_loss: 0.3985 - d_acc: 0.8220 - kl_divergence: 4.8617
Epoch 276/300
391/391 [==============================] - 56s 142ms/step - g_loss: 1.5727 - d_real_loss: 0.4624 - d_fake_loss: 0.4030 - d_acc: 0.8195 - kl_divergence: 4.6939
Epoch 277/300
391/391 [==============================] - 56s 142ms/step - g_loss: 1.5802 - d_real_loss: 0.4716 - d_fake_loss: 0.4127 - d_acc: 0.8116 - kl_divergence: 4.7853
Epoch 278/300
391/391 [==============================] - 56s 142ms/step - g_loss: 1.5422 - d_real_loss: 0.4631 - d_fake_loss: 0.4036 - d_acc: 0.8220 - kl_divergence: 4.8518
Epoch 279/300
391/391 [==============================] - 56s 142ms/step - g_loss: 1.5751 - d_real_loss: 0.4674 - d_fake_loss: 0.4064 - d_acc: 0.8142 - kl_divergence: 4.8066
Epoch 280/300
391/391 [==============================] - ETA: 0s - g_loss: 1.6054 - d_real_loss: 0.4740 - d_fake_loss: 0.4144 - d_acc: 0.8097 - kl_divergence: 4.7994
Saving Model Weights At Epoch 280.

391/391 [==============================] - 65s 167ms/step - g_loss: 1.6054 - d_real_loss: 0.4740 - d_fake_loss: 0.4144 - d_acc: 0.8097 - kl_divergence: 4.7993
Epoch 281/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5687 - d_real_loss: 0.4791 - d_fake_loss: 0.4183 - d_acc: 0.8064 - kl_divergence: 4.8275
Epoch 282/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.5484 - d_real_loss: 0.4674 - d_fake_loss: 0.4103 - d_acc: 0.8178 - kl_divergence: 4.7458
Epoch 283/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.5615 - d_real_loss: 0.4383 - d_fake_loss: 0.3841 - d_acc: 0.8437 - kl_divergence: 4.4060
Epoch 284/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.6019 - d_real_loss: 0.4616 - d_fake_loss: 0.4024 - d_acc: 0.8229 - kl_divergence: 4.7112
Epoch 285/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5676 - d_real_loss: 0.4607 - d_fake_loss: 0.4021 - d_acc: 0.8239 - kl_divergence: 4.7534
Epoch 286/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.6123 - d_real_loss: 0.4621 - d_fake_loss: 0.3998 - d_acc: 0.8226 - kl_divergence: 4.7811
Epoch 287/300
391/391 [==============================] - 55s 142ms/step - g_loss: 1.5799 - d_real_loss: 0.4619 - d_fake_loss: 0.4041 - d_acc: 0.8235 - kl_divergence: 4.6769
Epoch 288/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.6044 - d_real_loss: 0.4595 - d_fake_loss: 0.3996 - d_acc: 0.8251 - kl_divergence: 4.7736
Epoch 289/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.6035 - d_real_loss: 0.4607 - d_fake_loss: 0.4047 - d_acc: 0.8216 - kl_divergence: 4.7847
Epoch 290/300
391/391 [==============================] - ETA: 0s - g_loss: 1.5655 - d_real_loss: 0.4673 - d_fake_loss: 0.4093 - d_acc: 0.8208 - kl_divergence: 4.7222
Saving Model Weights At Epoch 290.

391/391 [==============================] - 63s 161ms/step - g_loss: 1.5655 - d_real_loss: 0.4673 - d_fake_loss: 0.4093 - d_acc: 0.8208 - kl_divergence: 4.7222
Epoch 291/300
391/391 [==============================] - 54s 138ms/step - g_loss: 1.5918 - d_real_loss: 0.4611 - d_fake_loss: 0.4075 - d_acc: 0.8220 - kl_divergence: 4.6733
Epoch 292/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.6095 - d_real_loss: 0.4609 - d_fake_loss: 0.4016 - d_acc: 0.8223 - kl_divergence: 4.8032
Epoch 293/300
391/391 [==============================] - 54s 139ms/step - g_loss: 1.6191 - d_real_loss: 0.4664 - d_fake_loss: 0.4063 - d_acc: 0.8172 - kl_divergence: 4.7989
Epoch 294/300
391/391 [==============================] - 54s 139ms/step - g_loss: 1.6498 - d_real_loss: 0.4541 - d_fake_loss: 0.3994 - d_acc: 0.8275 - kl_divergence: 4.8699
Epoch 295/300
391/391 [==============================] - 55s 140ms/step - g_loss: 1.5898 - d_real_loss: 0.4526 - d_fake_loss: 0.3957 - d_acc: 0.8315 - kl_divergence: 4.6952
Epoch 296/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.6154 - d_real_loss: 0.4497 - d_fake_loss: 0.3946 - d_acc: 0.8318 - kl_divergence: 4.7294
Epoch 297/300
391/391 [==============================] - 55s 142ms/step - g_loss: 1.5745 - d_real_loss: 0.4537 - d_fake_loss: 0.3962 - d_acc: 0.8276 - kl_divergence: 4.7927
Epoch 298/300
391/391 [==============================] - 55s 142ms/step - g_loss: 1.6405 - d_real_loss: 0.4525 - d_fake_loss: 0.3969 - d_acc: 0.8312 - kl_divergence: 4.8042
Epoch 299/300
391/391 [==============================] - 55s 141ms/step - g_loss: 1.6073 - d_real_loss: 0.4564 - d_fake_loss: 0.3997 - d_acc: 0.8267 - kl_divergence: 4.8603
Epoch 300/300
1/1 [==============================] - 1s 1s/steps - g_loss: 1.5998 - d_real_loss: 0.4559 - d_fake_loss: 0.3981 - d_acc: 0.8277 - kl_divergence: 4.82
1/1 [==============================] - 0s 44ms/step
1/1 [==============================] - 0s 41ms/step
1/1 [==============================] - 0s 57ms/step
1/1 [==============================] - 0s 52ms/step
1/1 [==============================] - 0s 42ms/step
1/1 [==============================] - 0s 45ms/step
1/1 [==============================] - 0s 33ms/step
1/1 [==============================] - 0s 38ms/step
1/1 [==============================] - 0s 42ms/step
4/4 [==============================] - 2s 235ms/step
4/4 [==============================] - 1s 157ms/step
Epoch 300: Average (IS): 2.362536907196045 | Std (IS): 0.4808556139469147 | FID Score: 210.86319907896524

Saving Model Weights At Epoch 300.

391/391 [==============================] - 89s 228ms/step - g_loss: 1.5998 - d_real_loss: 0.4559 - d_fake_loss: 0.3981 - d_acc: 0.8277 - kl_divergence: 4.8290

DISPLAYING BEST FID AND INCEPTION SCORES FOR IMPROVED CDCGAN

  • From the FID score, we can see that the improved cDCGAN model managed to achieve better scores as compared to most of the models previously tested, except for our original cDCGAN, meaning that this model is also quite viable when testing for the CIFAR-10 Dataset.
  • However, looking at the KL Divergence, we see that the improved cDCGAN model managed to achieve a lower score of 4.390, as compared to the original cDCGAN of 4.56589.
  • But for Inception Score, our original cDCGAN model managed to achieve a slightly higher IS, indicating that the images generated are higher quality and more diverse.
In [115]:
monitor = callbacks[0]

# Extract the best KL Divergence
best_kl_div = min(history.history['kl_divergence'])

# Extract the best FID Score
best_fid = min(monitor.tunedcdcgan_fid_scores) if monitor.tunedcdcgan_fid_scores else None

# Extract the best IS Score (average)
best_is_avg = max(is_avg for is_avg, _ in monitor.tunedcdcgan_is_scores) if monitor.tunedcdcgan_is_scores else None

# Create a DataFrame to store these best values
tunedcdcgan_df = pd.DataFrame({
    'Best KL Divergence': [best_kl_div],
    'Best FID': [best_fid],
    'Best IS': [best_is_avg]
})

# Display the DataFrame
tunedcdcgan_df
Out[115]:
Best KL Divergence Best FID Best IS
0 4.390045 210.863199 2.588598

PLOTTING THE MODEL'S PERFORMANCE OVER TIME

  • From the KL Divergence, we see that the KL Divergence fluctuates significantly at the beginning of training, indicating instability. Over time, the values seem to stabilize but do not show a clear downward trend, suggesting that the generator is not consistently improving towards producing data with a distribution that closely matches the real data.
  • For the discriminator accuracy, it improves significantly in the early epochs, indicating that it is learning to differentiate between real and fake data effectively. However, as the training progresses, the accuracy plateaus and has some variability but remains relatively high. Ideally, for a GAN, you would want the discriminator's accuracy to be around 50%, which would indicate that the generator is producing images of a quality indistinguishable from real images, making the discriminator's task of classification as difficult as random guessing.
  • For the losses, the generator loss decreases initially and then spikes around epoch 140, indicating a potential issue where the generator's performance worsened significantly at that point. Afterward, the loss decreases and stabilizes, although it does not reach as low as the initial epochs. This spike might indicate an instability or a learning rate that's too high, causing the generator to overshoot optimal points in the parameter space. As for the discriminator losses, it decreases over time and stabilizes, indicating it is consistently getting better for identifying real images as real, and identifying fake images as fake.
In [116]:
plot_model_performance(history)
No description has been provided for this image

LOADING AND TESTING THE GENERATOR WEIGHTS ON SYNTHETIC IMAGES

In [151]:
# Loading and testing the generator's weights
generator.load_weights('modelweights/tuned_cdcgan/epoch_300/generator_weights_epoch_300.h5')
generator.summary()
Model: "Improved_cDCGAN_Generator"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 Noise_Input (InputLayer)       [(None, 128)]        0           []                               
                                                                                                  
 Label_Input (InputLayer)       [(None, 1)]          0           []                               
                                                                                                  
 Noise_Dense (Dense)            (None, 2048)         264192      ['Noise_Input[0][0]']            
                                                                                                  
 Label_Embedding (Embedding)    (None, 1, 50)        500         ['Label_Input[0][0]']            
                                                                                                  
 Noise_LeakyReLU (LeakyReLU)    (None, 2048)         0           ['Noise_Dense[0][0]']            
                                                                                                  
 Label_Dense (Dense)            (None, 1, 16)        816         ['Label_Embedding[0][0]']        
                                                                                                  
 Noise_Reshape (Reshape)        (None, 4, 4, 128)    0           ['Noise_LeakyReLU[0][0]']        
                                                                                                  
 Label_Reshape (Reshape)        (None, 4, 4, 1)      0           ['Label_Dense[0][0]']            
                                                                                                  
 Concatenate (Concatenate)      (None, 4, 4, 129)    0           ['Noise_Reshape[0][0]',          
                                                                  'Label_Reshape[0][0]']          
                                                                                                  
 Conv1 (Conv2DTranspose)        (None, 8, 8, 128)    264320      ['Concatenate[0][0]']            
                                                                                                  
 Conv1_BatchNorm (BatchNormaliz  (None, 8, 8, 128)   512         ['Conv1[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv1_PReLU (PReLU)            (None, 8, 8, 128)    8192        ['Conv1_BatchNorm[0][0]']        
                                                                                                  
 Conv2 (Conv2DTranspose)        (None, 16, 16, 128)  262272      ['Conv1_PReLU[0][0]']            
                                                                                                  
 Conv2_BatchNorm (BatchNormaliz  (None, 16, 16, 128)  512        ['Conv2[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv2_PReLU (PReLU)            (None, 16, 16, 128)  32768       ['Conv2_BatchNorm[0][0]']        
                                                                                                  
 Conv3 (Conv2DTranspose)        (None, 32, 32, 128)  262272      ['Conv2_PReLU[0][0]']            
                                                                                                  
 Conv3_BatchNorm (BatchNormaliz  (None, 32, 32, 128)  512        ['Conv3[0][0]']                  
 ation)                                                                                           
                                                                                                  
 Conv3_PReLU (PReLU)            (None, 32, 32, 128)  131072      ['Conv3_BatchNorm[0][0]']        
                                                                                                  
 Output (Conv2D)                (None, 32, 32, 3)    3459        ['Conv3_PReLU[0][0]']            
                                                                                                  
==================================================================================================
Total params: 1,231,399
Trainable params: 1,230,631
Non-trainable params: 768
__________________________________________________________________________________________________
In [152]:
# Generate random latent vectors and class labels
latent_vectors = tf.random.normal(shape=(100, LATENT_DIM))
class_labels = tf.reshape(tf.range(10), shape=(10, 1))
class_labels = tf.tile(class_labels, multiples=(1, 10))
class_labels = tf.reshape(class_labels, shape=(100, 1))

# Generate images using the loaded generator
generated_images = generator([latent_vectors, class_labels], training=False)
generated_images = (generated_images + 1) / 2

# Create a dictionary to map class labels to their corresponding names
label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}

# Create a grid of subplots and display generated images with labels
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_images[i], cmap='gray')
    ax.set_title(label_map[class_labels[i].numpy().item()], fontsize=16)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

STORING LATENT SPACE EVOLUTION OF IMAGES FOR TUNED cDCGAN MODEL

In [145]:
# Check if the directory exists, and create it if it doesn't
directory_path = './animation/tuned_cdcgan'
if not os.path.exists(directory_path):
    os.makedirs(directory_path)

# Collect image paths
images = [] 
gan_img_paths = glob.glob("./images/tuned_cdcgan_images/*.png") 

# Load images
for path in gan_img_paths: 
    images.append(imageio.imread(path)) 
    
# Save the GIF in the created directory
imageio.mimsave(os.path.join(directory_path, 'TUNEDCDCGAN.gif'), images, duration=0.2)
filename=os.path.join(directory_path, 'TUNEDCDCGAN.gif')

# Display the GIF
display_gif(filename)
Out[145]:
No description has been provided for this image

ANALYSIS OF MODEL BUILDING PROCESS AND IMPROVEMENT¶

Was applying BatchNormalization, PreLU, and Gaussian Weight Initialization helpful?

  • From our model improvement of running 300 epochs, we can see that the images generated by the improved model was actually relatively decent, as we can roughly make out what some of the images are (more so for vehicles).
  • However, when comparing to our original cDCGAN model, that model architecture managed to achieve slightly better FID and Inception Scores, although the difference is minimal. Yet, one thing to note for this model is that it experienced a collapse around epoch 140 as the losses began to diverge and the quality of the images worsened (which was not as evident from our original cDCGAN model.
  • Also, looking at the KL Divergence values, we see that the values seem to stabilize but do not show a clear downward trend, suggesting that the generator is not consistently improving towards producing data with a distribution that closely matches the real data.

Hence, although the images generated by our improved cDCGAN model are decent, the metrics tell us that the best model we have developed would be our original cDCGAN model + Gradient Penalty, which was improved upon the baseline DCGAN model. So, for displaying our final 1000 colour images from CIFAR-10, we will be using the original cDCGAN model with gradient penalty.

DISPLAYING OF FINAL 1000 COLOUR IMAGES FROM CIFAR-10¶

  • After obtaining our best model, we will finally be displaying 1000 colour images from our CIFAR-10 Dataset, and we have segmented these images into 100 per class to be able to show the quality of our images from each of the classes.
  • To show these images, we will load our best generator weights and generate the respective images using our best model weights obtained from our model development and improvement processes earlier.

RE-INITIALIZING THE GENERATOR FUNCTION FOR EVALUATION

In [173]:
def create_generator(latent_dim):
    # foundation for label embeedded input
    label_input = Input(shape=(1,), name='Label_Input')
    label_embedding = Embedding(10, 10, name='Label_Embedding')(label_input)
    
    # linear activation
    label_embedding = Dense(4 * 4, name='Label_Dense')(label_embedding)

    # reshape to additional channel
    label_embedding = Reshape((4, 4, 1), name='Label_Reshape')(label_embedding)

    # foundation for 4x4 image input
    noise_input = Input(shape=(latent_dim,), name='Noise_Input')
    noise_dense = Dense(4 * 4 * 128, name='Noise_Dense')(noise_input)
    noise_dense = ReLU(name='Noise_ReLU')(noise_dense)
    noise_reshape = Reshape((4, 4, 128), name='Noise_Reshape')(noise_dense)

    # concatenate label embedding and image to produce 129-channel output
    concat = Concatenate(name='Concatenate')([noise_reshape, label_embedding])

    # upsample to 8x8
    conv1 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv1')(concat)
    conv1 = ReLU(name='Conv1_ReLU')(conv1)

    # upsample to 16x16
    conv2 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv2')(conv1)
    conv2 = ReLU(name='Conv2_ReLU')(conv2)

    # upsample to 32x32
    conv3 = Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', name='Conv3')(conv2)
    conv3 = ReLU(name='Conv3_ReLU')(conv3)

    # output 32x32x3
    output = Conv2D(3, (3, 3), activation='tanh', padding='same', name='Output')(conv3)
    model = Model(inputs=[noise_input, label_input], outputs=output, name='cDCGAN_Generator')

    return model

LOADING WEIGHTS FOR THE MODELS

In [174]:
weights = 'modelweights/cdcgan/epoch_200/generator_weights_epoch_200.h5'
latent_dim = 128
label_map = {
    0: 'Airplane',
    1: 'Automobile',
    2: 'Bird',
    3: 'Cat',
    4: 'Deer',
    5: 'Dog',
    6: 'Frog',
    7: 'Horse',
    8: 'Ship',
    9: 'Truck'
}
trained_generator = create_generator(latent_dim)
trained_generator.load_weights(weights)
In [175]:
# create tensor with 100 1s
airplane = tf.zeros(shape=(100,), dtype=tf.int32)
automobile = tf.ones(shape=(100,), dtype=tf.int32)
bird = tf.fill(dims=(100,), value=2)
cat = tf.fill(dims=(100,), value=3)
deer = tf.fill(dims=(100,), value=4)
dog = tf.fill(dims=(100,), value=5)
frog = tf.fill(dims=(100,), value=6)
horse = tf.fill(dims=(100,), value=7)
ship = tf.fill(dims=(100,), value=8)
truck = tf.fill(dims=(100,), value=9)

# reshape to 10x10
airplane = tf.reshape(airplane, shape=(100, 1))
automobile = tf.reshape(automobile, shape=(100, 1))
bird = tf.reshape(bird, shape=(100, 1))
cat = tf.reshape(cat, shape=(100, 1))
deer = tf.reshape(deer, shape=(100, 1))
dog = tf.reshape(dog, shape=(100, 1))
frog = tf.reshape(frog, shape=(100, 1))
horse = tf.reshape(horse, shape=(100, 1))
ship = tf.reshape(ship, shape=(100, 1))
truck = tf.reshape(truck, shape=(100, 1))
In [176]:
# Prepare images
generated_airplanes = trained_generator([tf.random.normal(shape=(100, latent_dim)), airplane], training=False)
generated_auto = trained_generator([tf.random.normal(shape=(100, latent_dim)), automobile], training=False)
generated_bird = trained_generator([tf.random.normal(shape=(100, latent_dim)), bird], training=False)
generated_cat = trained_generator([tf.random.normal(shape=(100, latent_dim)), cat], training=False)
generated_deer = trained_generator([tf.random.normal(shape=(100, latent_dim)), deer], training=False)
generated_dog = trained_generator([tf.random.normal(shape=(100, latent_dim)), dog], training=False)
generated_frog = trained_generator([tf.random.normal(shape=(100, latent_dim)), frog], training=False)
generated_horse = trained_generator([tf.random.normal(shape=(100, latent_dim)), horse], training=False)
generated_ship = trained_generator([tf.random.normal(shape=(100, latent_dim)), ship], training=False)
generated_truck = trained_generator([tf.random.normal(shape=(100, latent_dim)), truck], training=False)

# rescale to 0-1
generated_airplanes = (generated_airplanes + 1) / 2
generated_auto = (generated_auto + 1) / 2
generated_bird = (generated_bird + 1) / 2
generated_cat = (generated_cat + 1) / 2
generated_deer = (generated_deer + 1) / 2
generated_dog = (generated_dog + 1) / 2
generated_frog = (generated_frog + 1) / 2
generated_horse = (generated_horse + 1) / 2
generated_ship = (generated_ship + 1) / 2
generated_truck = (generated_truck + 1) / 2
In [177]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_airplanes[i])
    ax.set_title('Airplane', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [178]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_auto[i])
    ax.set_title('Automobile', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [179]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_bird[i])
    ax.set_title('Bird', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [180]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_cat[i])
    ax.set_title('Cat', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [181]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_deer[i])
    ax.set_title('Deer', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [182]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_dog[i])
    ax.set_title('Dog', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [183]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_frog[i])
    ax.set_title('Frog', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [184]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_horse[i])
    ax.set_title('Horse', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [185]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_ship[i])
    ax.set_title('Ship', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image
In [186]:
fig, axes = plt.subplots(10, 10, figsize=(20, 20))
axes = axes.flatten()

for i, ax in enumerate(axes):
    ax.imshow(generated_truck[i])
    ax.set_title('Truck', fontsize=18)
    ax.axis('off')

plt.tight_layout()
plt.show()
No description has been provided for this image

ANALYSIS OF FINAL IMAGES GENERATED¶

Upon generating all our images, we can see that many of the images are actually fairly realistic, but some of the images still contain distorted features that causes the images to be less distinct when comparing by class.

There have been some issues noted with the images generated by our models.

  • Some of the wheels of the vehicles are warped.
  • Some of the animals like the horses and deer have very thin or fat bodies.
  • Some of the images are somewhat corrupted by the background and noise.
  • Many of the animals also show weird animal parts being combined together.

INTERESTING INSIGHT FROM IMAGES GENERATED :

Something interesting to note is that our model performed better at generating vehicles (i.e. trucks, ships, automobiles etc.) as compared to the animal classes like cats and dogs. Although our model managed to learn the shape and texture of the animal relatively well, some of the shapes are still warped. This is likely due to the variety of images and styles that the classes have and the model could have a hard time trying to form the parts of the animal.

CONCLUSION OF GAN ANALYSIS : CIFAR-10 DATASET¶

From our analysis with Generative Adversarial Networks for the CIFAR-10 Colored Images Dataset, we can conclude the following :

  1. Generating animals poses a greater challenge.

We've observed that when it comes to the 10 distinct classes, generating non-living objects like vehicles is notably more straightforward compared to generating animals from the dataset. One reason for this could be that vehicles, such as trucks and automobiles, tend to have more angular or cuboid shapes, and they share relatively consistent features across real images in the dataset. In contrast, reproducing the intricate details of animals seems to be a more complex task, with some generated images only capturing plausible colors, textures, or shapes of the subject but not all of these aspects simultaneously. This challenge may stem from the significant variation among training examples within each animal class, as different animal breeds can exhibit very distinct features within their broader class categories.

  1. Enhancing Conditional Image Synthesis and Normalization

We can exert control over the class or properties of the generated images by incorporating class label information during training. Previous experiments with other GAN variants, like the AC-GAN, have demonstrated increased stability in training when auxiliary information is included. Therefore, it would be beneficial to explore GAN architectures that offer more fine-grained control over both the class and style of the synthesized images, such as the InfoGAN proposed by Chen et al. in 2016. Additionally, further investigations can be conducted into the use of normalization techniques for the discriminator, such as instance normalization, as spectral normalization has proven to be effective in stabilizing training.

  1. Potential for Future Enhancements

Given additional time and resources, I would consider experimenting with concepts borrowed from reinforcement learning. This might involve implementing techniques like experience replay, where previously generated images are occasionally introduced into the training process to be discriminated. Other stability-enhancing tricks used in reinforcement learning, as described by Pfau and Vinyals in 2017, could also be explored to potentially improve GAN training and performance.